The Root Cause
On AI Narrative / Issue 04 / April 2026

The Horseless Carriage Problem

The best framework the automation world has right now is measuring AI in the wrong unit entirely.


This morning a LinkedIn post stopped my scroll.

A framework has been making the rounds. It comes from researchers Hamza Farooq and Jaya Rajwani and was published in Lenny Rachitsky's newsletter. A former product executive with a decade at ServiceNow shared it this morning. The same framework came up weeks earlier in a director-level interview I had at a security software company.

When something moves that fast across that many different contexts, it deserves a careful read.

The framework breaks AI automation into three categories.

It is clean. It gives practitioners something to hold onto. And in a market where every Zapier workflow is getting relabeled as an agent, the instinct to create signal from noise is exactly right.

I know this framing well. A version of it was on the main stage at a major enterprise tech conference not long ago. I helped build that story. I delivered it to a room of thousands of people. And watching it now calcify into received wisdom is exactly what made me stop scrolling this morning.

This is not a critique from the outside. It is a reckoning from the inside.

Diagram 01 of 03: The Current Frame
How most of the automation and platform world currently categorizes AI. A useful diagnostic for today. Organized by architecture.

Category 1: Deterministic Automation
Example: A support system that troubleshoots issues, accesses account history, and resolves problems without human handoff.
Why this category: Steps are known. You just need AI to understand content and follow the flow.

Category 2: Reasoning and Acting Agents
Example: A code review agent that checks for vulnerabilities, suggests optimizations, and updates documentation across repos.
Why this category: You cannot map every change in advance. The system needs to reason through it.

Category 3: Multi-Agent Networks
Example: A global retailer where sales, inventory, and logistics must coordinate to fulfill orders across regions.
Why this category: No single agent owns the full workflow. Coordination across systems is required.

Framework: Farooq and Rajwani, via Lenny Rachitsky's newsletter. Organized by architectural complexity.

Why the platform, PaaS, and SaaS world loves this frame

If you built your career in workflow automation, this framework feels like home. ServiceNow, Salesforce, SAP, and every major platform company trained a generation of practitioners to think in flows. Define the steps. Define the conditions. Build the workflow. Repeat.

That is a real skill. It solves real problems. And when AI entered the picture, the natural instinct was to ask where the AI step goes in the flow.

The three categories answer that question reasonably well. They tell you how sophisticated your AI step needs to be. That is useful if you are asking the right question.

Here is where I started pulling at the thread.

A different kind of conversation

Yesterday I had coffee with Miles DeBenedictis. Miles is a PhD student in AI Ethics and CEO of Nomion AI. He is also a pastor, which is more relevant to this conversation than it might sound.

His argument is that the AI debate is fundamentally an anthropology debate. A question about what it means to be human, about agency, about who or what gets to set the direction of a life or an organization. Technologists, for all their capability, are among the least prepared people to answer it.

That perspective changes how he sees what engineers get wrong.

The pattern Miles kept coming back to was this: when engineers encounter a fundamentally new capability, they reach for their existing habits first. They optimize for the code path they know. They design around constraints they are used to working within.

What they often miss is the output they are actually trying to reach. Not the process. The goal.

I saw the framework post this morning and thought about that conversation the entire time I read it.

We have been here before

When the automobile arrived, people called it a horseless carriage. That was not just a naming problem. It was a thinking problem with real consequences. Roads were designed like horse paths. Laws were written around animal traffic speeds. Engine output was measured in horsepower, using the thing being replaced as the unit of comparison.

Nobody was being lazy or careless. They were doing exactly what humans do when something genuinely new arrives. They reached for the nearest familiar frame and measured the new thing against it.

The three-category framework does the same thing with AI. It measures capability in workflow-power. How many steps. How much reasoning. How many agents coordinating. Real measurements. Wrong unit.

What is changing is not how workflows are structured. It is the requirement for human cognitive presence at each decision point. When that requirement shifts, everything downstream shifts with it. The frame that made sense for automation does not hold for what comes next.

This is the horseless carriage moment for AI. And like that moment, the people most confident in the current frame are often the ones with the most invested in it.

I include myself in that. I helped build the carriage.

The variable nobody is asking about

The current framework organizes around architecture. How complex is the workflow. How much does the system reason. How many agents are coordinating.

Those are real distinctions. But they describe how a system is built. The question that changes everything downstream is different.

Who owns the goal?

That one question tells you more about what you are working with than any category label. And it changes everything about how you position, demo, and compete.

Diagram 02 of 03: A Better Frame
Architecture is not the variable that changes everything. Goal ownership is. The question the three categories never ask. Organized by goal origin, running from "human owns everything" to "system participates in the goal."

Orientation 01: Human defines the path
Goal and method are both specified. The system executes exactly what it was told. The ceiling is permanent and human-set. Familiar territory.

Orientation 02: Human defines the destination
The goal is specified. The system finds its own route. Edges are navigated, not anticipated. Humans still own the why. Where most AI lives today.

Orientation 03: System participates in the goal
The system surfaces problems humans did not know to ask about. It does not just answer the question. It questions the question. Emerging now.

Orientation 04: System originates goals
The system identifies objectives, generates approaches, and adapts them over time. Human judgment sits at review, not design. The horizon being built toward.

Ask these before the demo, the battlecard, or the analyst briefing:

Who set the goal? If only a human could have set it, you are still in the old frame.

What happens at the edge? Does the system stop, or reason through what was not anticipated?

What changes tomorrow? Does it learn from what it did, or start identical every time?

When a human defines the path and the destination, you are in familiar territory. The workflow is the product. Features are stable. The competitive frame holds.

When a human defines only the destination and the system finds its own route, the old tools start to crack. Edges get navigated rather than anticipated. The system does not stop where you expected it to stop.

When the system starts participating in what the destination should be, surfacing problems the human did not know to ask about, you are in territory the three categories have no language for.

And where things are heading right now is a system that identifies objectives, generates approaches, and adapts as it goes. Human judgment sits at review, not design.

That is not a more complex workflow. That is a different relationship between humans and thinking work entirely.

What this means if you are in technical marketing

Most enterprise AI demos right now are selling a Category 3 story. Multi-agent networks. Super agents. Human-managed autonomy. What actually shows up on screen is Category 1. Deterministic automation with an AI label on the routing step. Buyers feel that gap even when they cannot name it. It is why so many AI demos land with polite interest and no urgency.

But the bigger risk runs the other direction. If you are working with a system that genuinely operates beyond what the three categories can describe, and you position it using architecture language, you have done something worse than underclaiming. You handed your competitor the frame. You made your product comparable to something it is not actually comparable to.

Here is the dimension almost nobody is talking about. The human role changes too. In a deterministic workflow, the human is a checkpoint operator. They approve steps at defined moments in a defined process. That role was stable because the workflow was stable.

In a goal-oriented system, the human becomes a judgment architect. They are no longer approving steps. They are defining the boundaries within which the system operates, evaluating what it surfaces, and deciding when to expand or contract autonomy as trust builds. That is a fundamentally different job. And most enterprise buyers have not reckoned with what that means for their people, their org design, or their governance model.

The demo that shows a system doing something autonomously without showing how the human role evolves around it is telling half the story. The half that closes deals is the other one.

Before you build the demo, write the battlecard, or prepare the analyst briefing, sit with these three questions.

Who set the goal in what I am about to show? If only a human could have set it, you are still inside the old frame.

What happens at the edge of what was designed? Does the system stop, or does it navigate through what was not anticipated?

What does this system do tomorrow that it could not do today? Does it learn and adapt, or does it start identical every time?

Those questions tell you more about what you are working with than any category label. They also tell you whether the story you are about to tell is the right story for what the product actually is.

Diagram 03 of 03: The Demo Story Has to Change
Positioning a goal-oriented system with architecture language underclaims it, boxes it, and hands competitors the frame. Here is what shifts. For technical marketers.

Old demo arc

Open: Here is the problem. Defined as workflow pain. Too many steps. Too much manual effort. Too slow.

Show: Here is the feature. The product automates the steps. Look at how it routes, classifies, and resolves.

Walk: Here is how the workflow runs. Step by step. The system does what was designed. Every edge was anticipated.

Close: Here is where we fit. Category comparison. Competitive matrix. Feature checklist. Price per seat.

New demo arc

Open: Here is the ceiling that existed. Every workflow was only as smart as the person who built it. That ceiling was real. What changed is not that it disappeared. It is what now sets the limit.

Shift: Here is what navigating toward the outcome looks like. The system finds paths that were never defined. The demo moment is not the path. It is what the system surfaces on the way there.

Show: Here is what it found that you did not know was there. The system surfaced a dimension of the problem you did not know existed. Not because you told it to look. Because the obstacle was in the way of the outcome.

Close: Here is how the human role changes. Not a feature list. A trust architecture that grows. The human moves from checkpoint operator to judgment architect. That trajectory is the sale.

Let me get ahead of the pushback

I know what some of you are thinking.

The first reaction is about intent. The researchers who built this framework were not trying to define the ceiling of AI forever. They were trying to cut through hype for practitioners who were drowning in it. That is fair. I get it. But here is what happens to every useful idea that spreads this fast. The authors' intentions stop mattering. The framework gets used in hiring decisions, strategy sessions, and product roadmaps by people who never read the original piece. At that point it is not a diagnostic anymore. It is a default. And defaults become ceilings.

The second reaction is more practical. Most teams are still struggling to execute Category 1 reliably. Why are we talking about goal orientation when people cannot even get basic automation to work? That lands. It is a fair objection. But the mental model you start with is not neutral. It shapes every decision downstream. The governance language, the architecture choices, the org design. If you build those things inside a deterministic frame, you do not get to just swap in a new frame later. You rebuild. Starting with the right mental model is not a luxury. It is the cheaper path.

The third one is the one I find most interesting. Vendors are already pitching agentic AI and goal-oriented capability everywhere you look. If the people at the top are already thinking beyond the three categories, is this whole argument already settled?

Not even close.

There is a gap most people are not talking about. The vendor is pitching Orientations 03 and 04, systems that participate in setting goals and surface problems you did not know to ask about. The organization on the other side of that conversation has spent years being trained to think in deterministic flows. It is still living in Orientation 01, where a human defines every step. That is not a technology gap. It is a conceptual one. And it shows up everywhere. In the demo that undersells what the product actually does. In the battlecard that draws the wrong comparison. In the analyst briefing that boxes a genuinely goal-oriented product into a workflow-complexity story because that was the rubric everyone agreed on.

That gap is exactly where deals stall. And it is exactly what this issue is about.

There is more to this story. Next month I want to talk about something less comfortable. Enterprises did not just fail to understand goal-oriented AI. Many of them actively built against it. The governance language, the approval gates, the human-in-the-loop architecture that got baked into products and org design over the last decade. It felt like safety. In many cases it was anxiety management dressed as governance. And now the same companies are trying to sell agentic capability into the organizations they spent years teaching to distrust autonomy. The cage and the key are being sold by the same hand.

That is a harder conversation. It is coming in May.

And if you are asking what all of this means for how you actually build the demo, structure the analyst briefing, or walk into an evaluation room differently, that is its own conversation. One worth having properly. That is coming too.

We called the automobile a horseless carriage because we could not yet see what it actually was. We measured it against the thing it was replacing. We built roads for it that were still designed for horses.

The three-category framework is our horseless carriage language for AI. It is not wrong. It is just not big enough for what is coming down the road.

Think from the root.

Chad
thinkroot.io
A note for those who want to follow this thread further

Miles DeBenedictis published something this week that approaches the same moment from a completely different direction. His historical anchor is the collapse of the New Bedford whaling industry. His question is about the people on the other side of the displacement, what happens to the workers whose expertise the new economy has no category for. We ended up in similar places from very different starting points. We figured that out over coffee, before either of us had written a word. That felt worth naming.

pastormiles.substack.com

Previous: Issue 03, The Three-Phase AI Arc

Get each issue when it drops.

No cadence promises. No filler content. Published when there is something worth saying to the people who need to hear it.

Subscribe via Beehiiv → Free. No schedule. Just signal.