    ThinkRoot / Field Notes / Issue 02

Your New Teammate Is an AI Agent

You are a technical marketer. You have a positioning deck due in six weeks. One question before you hit send on that brief: do you know where that prompt goes?

Chad Corriveau
March 2026
10 min read
Illustration: data flowing through AI systems, the governance question every technical marketer needs to ask
01 / The Liability

The question most people in this role have never seriously asked

If you work on a team where AI tools are already embedded in the workflow, and most teams are at this point, someone in your Slack has already asked why the draft is not in review yet. What used to be a generous runway is now a polite fiction. Tools like the ones we are about to discuss have compressed six-week timelines into days, sometimes hours, and the expectation has moved with the capability whether the process caught up or not.

So let us say the launch is real, the roadmap details are sensitive, and you have been sitting on a differentiation angle that your competitors have not figured out yet. You open your AI agent, paste the brief, and start typing.

Stop. One question before you hit send. Do you know where that prompt goes?

Most people in this role have never seriously asked that. And right now, with agentic AI woven into the daily workflow of nearly every enterprise marketing team on the planet, that unanswered question is not a minor gap. It is a liability. A specific one.

Technical marketers sit at a crossroads that almost nobody else in the company occupies: unreleased roadmap details on one side, competitive strategy on the other, and customer-facing output going out the door. No other non-executive role carries that combination all at once.

02 / The Instinct

The instinct you already have

I spent four years in the U.S. Navy as a Yeoman with a Top Secret clearance. The Yeoman rating was the Navy's administrative specialist billet: the person responsible for official correspondence, records management, and the handling of sensitive documentation at every level of the chain of command. The Navy disestablished the rating in 2016 and folded the responsibilities into the Logistics Specialist rating. More systems. More complexity. More moving parts requiring careful oversight. The job did not disappear. It grew into something that demanded more disciplined management, not less.

I think about that arc a lot when I look at what is happening to technical marketers right now.

The Yeoman job sounds administrative. In practice, it was about information stewardship at a level most people never experience. I learned fast that knowing how to handle a document was only part of it. I had to know what the document was, who had standing to see it, in what form it could leave the room, and what happened if it found its way to the wrong hands at the wrong moment.

Nobody handed me a complete list. I developed a classification instinct. I would look at a piece of information and I just knew: what it was, what it was not, and how carefully it needed to move.

That instinct is not about clearance levels. It is about a habit. Before sharing anything, ask: who needs to see this, in what form, and what happens if it ends up somewhere it should not?

That question transfers completely. Whether you are handling a classified document in 2003 or pasting a competitive brief into a frontier AI model in 2026, the underlying discipline is identical. The stakes are different. The instinct is the same.

03 / Three Tiers
Infographic: AI governance tiers. Tier One, internal agents (Copilot, internal GPT, validated infrastructure): low risk, verify config. Tier Two, external frontier agents (Claude, Gemini, ChatGPT): medium risk, abstract first. Tier Three, the gray zone (third-party AI features, unknown data retention): high risk, check terms.

The three tiers you are already working with

Most enterprise marketing teams are not running one AI tool. They are running three or four at the same time, usually without a clear framework for which tool handles which type of work. Most companies have an AI handbook by now. Most of those handbooks were written before anyone seriously thought through what happens when a marketer pastes internal strategy into a frontier model at ten in the morning.

Here is a cleaner way to think about what you already have.

Tier One

Internal agents

Microsoft Copilot, internally deployed GPT instances, any AI running inside your company's own validated infrastructure. Lower risk for sensitive content because containment is the design intent. The tradeoff is real: internal agents are often limited in output quality compared to frontier commercial models. Before you treat any internal tool as fully contained, confirm with IT that the tenant configuration actually keeps your prompt data inside your organization's walls. Do not assume. Verify once, document it, move on.

Tier Two

External commercial agents

Claude, Gemini, ChatGPT. The frontier models that produce the strongest outputs for narrative work, positioning strategy, and content generation. The capability here is genuinely impressive. So is the governance responsibility that comes with it. When working with a frontier model on anything that touches internal strategy, abstract before you paste. Describe the product category without naming the unreleased feature. The agent needs the shape of the problem, not the proprietary details you have been protecting for six months.

Tier Three

The gray zone

This is where sensitive content actually leaks in enterprise organizations right now. Not through bad actors. Not through negligence. Through good marketers moving fast, producing genuinely strong work, and simply not asking where their content goes when they hit generate. ElevenLabs producing a voiceover from a script with unreleased product language in it. Figma AI features processing design prompts on third-party model infrastructure. AI editing tools with data retention policies buried six pages into a terms of service that nobody read before clicking accept.

Before you run sensitive content through any AI feature in a third-party tool, spend four minutes finding the data processing terms. If you cannot find them easily, treat the tool as external commercial and abstract before you use it. Four minutes is not a bureaucratic exercise. It is the classification instinct applied somewhere new.
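The three tiers above can be sketched as a simple lookup table that pairs each tier with its risk level and handling rule. This is an illustrative sketch, not a real API: the `TIERS` structure and `handling_rule` function are hypothetical names, and the sensible default when a tool is unknown is the strictest tier.

```python
# A minimal sketch of the three-tier model as a lookup table.
# Structure and names are illustrative; the handling rules mirror
# the tiers described in the article.

TIERS = {
    "internal": {
        "examples": ["Microsoft Copilot (tenant-contained)", "internal GPT instance"],
        "risk": "low",
        "rule": "Verify tenant configuration with IT once, document it, move on.",
    },
    "external_frontier": {
        "examples": ["Claude", "Gemini", "ChatGPT"],
        "risk": "medium",
        "rule": "Abstract before you paste: shape of the problem, not proprietary details.",
    },
    "gray_zone": {
        "examples": ["AI features embedded in third-party tools"],
        "risk": "high",
        "rule": "Find the data processing terms first; if you cannot, treat as external frontier.",
    },
}

def handling_rule(tier: str) -> str:
    """Return the handling rule for a tier, defaulting to the strictest."""
    return TIERS.get(tier, TIERS["gray_zone"])["rule"]
```

The default branch is the point: when you cannot place a tool, you treat it as the gray zone, which is exactly the four-minute terms-of-service check described above.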

04 / The Brief

The brief is the governance document

Stop thinking about AI governance as something that happens at the IT or legal level and then filters down to you eventually. Start thinking about it as something that lives inside how you write a brief.

Before you open any agent, four questions.

01

What is the sensitivity level of what you are about to share? Unreleased roadmap details, competitive differentiation, customer-specific data, and financial projections all carry different weights. Know what you have before you paste it.

02

Which tier is this agent operating in? That answer determines how much internal context belongs in the brief and how much needs to come out first.

03

What output are you asking for, and who will see it? A rough first draft for internal review carries different risk than a prompt producing the final customer-facing asset. Work backward from the audience of the output.

04

How will you review what comes back before it leaves the room? AI agents produce confident output. Confidently wrong, confidently off-brand, and occasionally surfacing a sensitive detail from your brief that should not have made the trip.

05 / The Agentic Shift

Why the discipline will matter even more tomorrow than it does today

Multi-agent orchestration systems, frameworks where AI agents scope projects, assign subtasks to other agents, and execute complex workflows with a human reviewing the final output rather than each step, are already running inside engineering and product organizations at forward-leaning enterprises.

For technical marketers specifically, this changes the question. It moves from what do I put into a prompt to what do I authorize an agent to do on my behalf.

That is a meaningfully different kind of accountability. When an agent is not just helping you draft but actually executing parts of the workflow, the classification question moves upstream. You are no longer asking what goes into a single prompt. You are asking what the agent has standing to access, what it is authorized to act on without checking back, and where the human review checkpoint sits before output proceeds.

Define the access boundary. Build the review checkpoint in before you need it. Be explicit about what requires human judgment before anything leaves the team.
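One way to make that explicit is a declarative policy checked at every agent action: what the agent may read, what it may do without checking back, and what pauses for human review. This is a sketch under stated assumptions, not a real orchestration framework; the scope names, action names, and `authorize` function are all hypothetical.

```python
# A hypothetical access-boundary policy for a delegated agent.
# Scope and action names are illustrative, not a real framework's API.

AGENT_POLICY = {
    "may_read": {"public_docs", "approved_messaging"},            # access boundary
    "may_act_without_checkin": {"draft_copy", "summarize_research"},
    "requires_human_review": {"publish", "send_external", "cite_roadmap"},
}

def authorize(action: str, resource: str) -> str:
    """Decide whether an agent action proceeds, pauses for review, or is denied."""
    if action in AGENT_POLICY["requires_human_review"]:
        return "pause_for_review"  # the review checkpoint, built in before it is needed
    if action in AGENT_POLICY["may_act_without_checkin"] and \
       resource in AGENT_POLICY["may_read"]:
        return "proceed"
    return "deny"  # anything outside the defined boundary stops by default
```

The design choice worth copying is the default: an action that matches no rule is denied, so the boundary you did not think to define still holds.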

Every capability in the stack gets automated eventually. Research. Drafts. Competitive analysis. Campaign execution. All of it compresses. The thing that does not compress, the thing no agent inherits when you hand it a brief, is the judgment about what should be in that brief at all. That judgment requires context, consequence, and standing in the organization.

The question is not whether AI changes your role. It already has. The question is whether you understand what part of your role it cannot touch.

Questions this issue answers

How do you use a frontier model on sensitive work without exposing it?

Abstract before you paste. Describe the product category without naming the unreleased feature. Frame the differentiation angle without specifying the technical implementation underneath it. An AI agent does not need your roadmap to help you build a story around it. It needs the shape of the problem, not the proprietary details you have been protecting for months.

What are the three tiers of AI tools?

Tier One is internal agents with IT-validated containment, appropriate for your most sensitive strategic content but with real capability tradeoffs. Tier Two is external frontier models like Claude or ChatGPT, which produce the strongest outputs but require abstracting sensitive specifics. Tier Three is the gray zone of third-party tools with AI features embedded, where data retention policies are often buried and where most real leakage actually happens.

What is the classification instinct?

The classification instinct is the habit of asking, before sharing anything: who needs to see this, in what form, and what happens if it ends up somewhere it should not? It originated in military information stewardship but transfers completely to AI governance. Whether you are handling a classified document or pasting a competitive brief into a frontier model, the underlying discipline is identical.

How does agentic AI change the governance question?

The question shifts from what do I put into a prompt to what do I authorize an agent to do on my behalf. When an agent is not just helping you draft but actually executing parts of the workflow, you are asking what the agent has standing to access, what it is authorized to act on without checking back, and where the human review checkpoint sits before output proceeds.

What should you ask before opening any agent?

First: what is the sensitivity level of what you are about to share? Second: which tier is this agent operating in? Third: what output are you asking for and who will see it? Fourth: how will you review what comes back before it leaves the room? The review step is not optional. AI agents produce confident output, and confidence is not the same as accuracy.

The practitioner tools behind this framework

ThinkRoot's Demo Intelligence platform applies the same structured judgment this issue describes to demo content quality.

Demo Intelligence: Score a Demo. Run your demo through the TDCI framework and get a structured coherence score across nine formats.

The Framework: What Is Technical Marketing. The practitioner definition of the discipline this issue is built inside, from someone who has done the job for twelve years.
Author

Chad Corriveau

Founder of ThinkRoot. Twelve years in Technical Product Marketing at ServiceNow. U.S. Navy veteran, Yeoman E-4, Top Secret clearance. Built narrative and AI governance frameworks for enterprise organizations across ITSM, SecOps, ITAM, and Agentic AI.
