ThinkRoot Practitioner Framework
Technical Marketing Standard

The Demo
Coherence
Index

The quality standard technical marketing has never had.

Eight demo categories. Weighted scoring dimensions. Honest failure mode taxonomy. Version 1.1 defines what good looks like across every format a technical marketer owns. Full dimension tables, research citations, and the Technical Integrity standard are now published on this page.

8 Demo Categories
6 Production Methods
v1.0 Published 2025
Download the Framework
ThinkRoot Demo Coherence Index

The full framework document. Free. No paywall. Email only so we know who the practitioners are.

No spam. No drip campaign. One newsletter if you want it.

Download PDF directly
TDCI Category Overview v1.0 / 2025
EVENT | Keynote | Emotional impact + memorability | A
DIGITAL | Video Teaser | Technical value in under 3 minutes | B
CONV. | Explainer | Comprehension + CTA conversion | C
CONF. | Product Session | Right depth for the room | B
PLG | Interactive / Sandbox | Aha moment without hand-holding | C
TECH | Technical Deep-Dive | De-risk the technical decision | A
SALES | Live SE Demo | Deal progression to POC | B
ANALYST | Analyst Briefing | Shift Quadrant / Wave positioning | A
01. The Problem
No standard has ever existed.

Enterprise software companies spend significant resources building technical marketing demos. They review them against internal comfort, not buyer impact. A keynote gets applause from stakeholders and produces no pipeline movement. That is a failure that looks like success.

02. The Failure Mode
Format mismatch is the root cause.

Content built for one purpose delivered in a context requiring something different. A product session running like a keynote loses its technical audience. A teaser without specific proof in the first 20 seconds loses its viewer. These are not execution failures. They are framework failures.

03. The Fix
Eight formats. Distinct criteria for each.

The TDCI defines eight demo categories, each with its own scoring dimensions, weights, and failure mode taxonomy. A keynote is not evaluated on the same criteria as an analyst briefing. The framework treats each format as a distinct discipline because it is one.

The Framework

Eight categories.
One coherent standard.

Every category has a defined audience, a defined purpose, and a defined set of scoring criteria. The dimensions shift with the format. So does the scoring philosophy.

EVENT | Keynote | 3-7 min | Emotional impact. A keynote without a wow moment is a product session that wandered onto the main stage.
DIGITAL | Video Teaser | 90 sec - 3 min | Polished, fast, unforgiving. Generic SaaS language is a critical failure in this format.
CONVERSION | Explainer | 5-7 min | Serves VP buyer and practitioner simultaneously. Losing either one is a failure.
CONFERENCE | Product Session | 15-20 min | Depth calibration is the entire game. The workhorse format of technical marketing.
PLG | Interactive / Sandbox | Self-paced | Aha moment reachable without hand-holding. Every extra step to aha is a dropout.
TECHNICAL | Technical Deep-Dive | 45-90 min | Not meant to wow. Meant to de-risk. Scoring is nearly inverted from keynote.
SALES | Live SE Demo | 30-45 min | Discovery alignment is the dominant dimension. Demoing what the SE knows vs. what the prospect asked about is the most common failure.
ANALYST | Analyst Briefing | 15-30 min | Generic category claims score zero. Analysts have seen every vendor pitch. Specific differentiation is the only currency.
Scoring Philosophy
A score of 50 to 65 is mediocre. The TDCI does not soften this. The purpose of a score is to identify where improvement is required, not to provide encouragement.

Scoring criteria for
every format.

Each format has its own weighted dimensions, defined failure modes, and scoring rationale grounded in external research and practitioner experience. Select a format to see the full breakdown. Weights reflect observed patterns in what separates demos that achieve their stated goal from those that do not.

SALES
Live SE Demo
KPI: Deal progression to POC | Length: 30-45 min | Audience: Mixed buying group
Dimension / Rationale | Common Failure Mode | Weight
Discovery Alignment
Strongest single predictor in Gong's 67,149-demo study, corroborated across 3M+ aggregate recordings. Maps to what was learned in discovery including competitive displacement context. Who are we replacing, and does the demo address that specifically? If the demo fails here, nothing else matters.
Demoing what the SE is comfortable with rather than what the prospect asked about in discovery.
25%
Narrative Structure
Upside-down pyramid: outcome first, context second, workflow never. Gong data confirms demos mapping presentation order to prospect-perceived importance were the most successful. Do the Last Thing First (Peter Cohan, Great Demo!).
Linear walkthrough saving the most relevant use case for the end. High-ranking attendees leave before the best material.
20%
Dialogue Ratio
No winning demo in Gong's original 67,149-demo dataset had more than 76 seconds of uninterrupted pitching. 65:35 talk-to-listen is the winning ratio. Speaker switch rate increases 36% in the second half of winning calls. A demo is a conversation, not a presentation.
Monologue presentation with no exchange. Presenter talks. Prospect listens. Deal stalls.
20%
Feature Discipline
Winning reps spend 39% less time on features than average performers. 9-minute rule for core product demonstration. Showing capabilities not discussed in discovery actively reduces win rates. Solve exactly — no more, no less.
Showing every capability rather than the three capabilities the prospect asked about. Feature overload after a strong opening.
15%
Technical Integrity
TMMs are accountable for every claim shown. Does every capability reflect something the audience can access, deploy, or verify? Figma in an SE demo context: capped at 40% of dimension maximum. Roadmap with explicit ship date and customer validation: partial credit.
Roadmap functionality shown without disclosure. The architect asks for a POC and discovers the demo showed a wireframe.
15%
Production Method Fit
Recommended: Live Product or Captured Simulation. Does the production choice signal confidence in the product's ability to stand on its own?
Figma or screenshot tour in a technical evaluation context where the audience can validate claims.
5%
The Standard
The vendor perceived as doing a superior job with discovery is in a competitively advantageous position. Discovery Alignment is the mechanism by which every other dimension earns its relevance. A technically perfect demo that addresses the wrong use case has failed before the first screen appears.
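To make the weighting mechanics concrete, here is a minimal sketch of how the Live SE Demo dimensions above combine into a single score, assuming each dimension is rated 0 to 100 and using the weights from the table. The example ratings and the function name tdci_score are illustrative, not part of the published framework.

```python
# Minimal sketch: combining Live SE Demo dimension ratings into one TDCI score.
# Weights are taken from the table above; the 0-100 ratings are illustrative only.

SE_DEMO_WEIGHTS = {
    "Discovery Alignment": 0.25,
    "Narrative Structure": 0.20,
    "Dialogue Ratio": 0.20,
    "Feature Discipline": 0.15,
    "Technical Integrity": 0.15,
    "Production Method Fit": 0.05,
}

def tdci_score(ratings: dict, weights: dict) -> float:
    """Weighted average of 0-100 dimension ratings; weights must sum to 1.0."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must total 100%"
    return sum(ratings[dim] * w for dim, w in weights.items())

# Illustrative demo: strong discovery alignment, weak feature discipline.
ratings = {
    "Discovery Alignment": 85,
    "Narrative Structure": 70,
    "Dialogue Ratio": 60,
    "Feature Discipline": 40,
    "Technical Integrity": 75,
    "Production Method Fit": 80,
}

print(tdci_score(ratings, SE_DEMO_WEIGHTS))  # 68.5, just above the mediocre 50-65 band
```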
PLG
Interactive / PLG Sandbox
KPI: Aha moment reachable without hand-holding | Length: Self-paced | Audience: Self-qualifying user
Dimension / Rationale | Common Failure Mode | Weight
Hook Architecture
Nothing else matters if belief does not arrive before attention is lost. For AI-native products in 2026, the aha moment should be demonstrable in a single query result. The question is not "did you arrive under 3 minutes" but "did the architecture create conviction before the audience disengaged?"
5+ step guided tour before first meaningful interaction. User reaches settings before seeing the product do anything.
30%
Friction Count
Every extra step between entry and aha moment is a documented dropout risk. PLG benchmark: if fewer than 40% of users hit the aha moment in week one, revisit onboarding. You can count the clicks. This is the most directly measurable dimension in the format.
Requiring email verification, account configuration, or data import before showing value. Product asks for trust before earning it.
20%
Guided Path Quality
80% of companies with 50%+ activation rates use deliberately designed onboarding. The path to aha must be architected, not assumed. Tooltips, progressive disclosure, contextual prompts — this is the TMM's primary craft contribution to a PLG motion.
No guidance, no checkpoints, no signal about what to do next. User abandons because the product expected too much prior knowledge.
15%
Technical Proof Specificity
Audience-path calibrated: VP buyer needs story-first proof (five screens that make them say "you solved this"), technical evaluator needs accuracy-first proof. Wistia: educational and instructional content outperforms all other types at similar lengths. Generic demos score zero.
Same tour served to VP and architect. Neither audience gets what they came for.
15%
Motion Fit / Technical Integrity
Production method must match the company's GTM motion and honestly represent the product's current state. Polished brand motion: Storylane/Navattic appropriate. Technical validation motion: raw sandbox appropriate. Figma in any self-service PLG context is an automatic fail. The user is alone when they hit the wall between demo and reality.
Figma or prototype in PLG context. Trust failure is instantaneous and unrecoverable with no SE in the room to explain.
20%
The Standard
The GTM motion determines the scoring standard. The TDCI scores whether the demo's architecture is optimized to get its specific audience to belief as fast as the product type allows, not whether it hit an arbitrary time target. For AI-native products, three minutes without a belief moment is a failure.
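Two of the PLG dimensions above are directly countable. A minimal sketch, assuming a simple list of onboarding steps and the week-one activation benchmark cited under Friction Count; the step names, user counts, and function names are illustrative assumptions.

```python
# Minimal sketch of the two directly countable PLG signals described above: the
# step count between sandbox entry and the aha moment, and the week-one
# activation benchmark. Step names and user counts are illustrative.

AHA_WEEK_ONE_BENCHMARK = 0.40  # under 40% week-one activation: revisit onboarding

def friction_count(steps_before_aha: list) -> int:
    """Every step a user must complete before the aha moment is a dropout risk."""
    return len(steps_before_aha)

def needs_onboarding_review(users_hitting_aha_week_one: int, new_users: int) -> bool:
    """True when week-one activation falls below the 40% benchmark."""
    return users_hitting_aha_week_one / new_users < AHA_WEEK_ONE_BENCHMARK

path = ["create account", "verify email", "import sample data", "run first query"]
print(friction_count(path))               # 4 steps before any value is shown
print(needs_onboarding_review(310, 1000)) # True: 31% activation, revisit onboarding
```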
DIGITAL
Video Teaser
KPI: Technical value communicated before attention is lost | Length: 90 sec - 3 min | Audience: Website visitor, social feed scroller
Dimension / Rationale | Common Failure Mode | Weight
Hook Speed
Wistia (100M+ videos): 17.3% engagement loss in the opening 2% of a 5-10 minute video. Short-form engagement dropped 10% year-over-year in 2024 as audience expectations for immediate value keep compressing. For AI-native products, the hook is a result, not a feature statement.
Scene-setting or brand intro where proof should be. Viewer is already gone before the value lands.
30%
Script-Visual Sync
Every second of screen time earns its place. Narration and visuals must reinforce the same point at the same moment. The most common production failure in teasers built without a TMM: voiceover says "intelligent automation" while screen shows a settings menu.
Narration and visuals telling different stories. Viewer's attention splits. Message fails to land.
25%
Technical Proof Fit
Includes Multi-Persona Coherence: the teaser must earn the time-poor executive (one clear outcome in 15 seconds), satisfy the practitioner's technical pattern-match (one specific proof point by 45 seconds), and give both something specific enough to carry into an internal conversation. Wistia: educational content outperforms all other types at similar lengths.
Generic SaaS language that could describe any product. "Streamline your workflow" is not a proof point.
25%
CTA Sharpness
The teaser's job is the handoff, not the close. Wistia: CTAs at end of video have highest conversion rate. A strong CTA names the specific next step: "See how we handle 10M events in real time. Request a technical deep-dive." "Learn more" fails the format's KPI entirely.
"Learn more" or no CTA. The viewer understood the product and had nowhere to go with that understanding.
20%
The Standard
Polished production is the correct choice for this format, and the TMM's accountability is the technical specificity within the polish. A beautiful teaser that could describe any product in the category scores zero on Technical Proof Fit regardless of production value.
EVENT
Keynote / Main Stage
KPI: Emotional impact and memorability | Length: 3-7 min | Audience: Room of hundreds to thousands
Dimension / Rationale | Common Failure Mode | Weight
Emotional Impact
Audiences form their event value impression within the first five minutes of the opening keynote. Six months later, the keynote is remembered more vividly than any other event element. Storytelling activates both language processing and experiential memory simultaneously (Journal of Cognitive Neuroscience). A keynote that fails to move the room fails its primary purpose regardless of technical accuracy.
Technically accurate but emotionally inert. Correct information delivered without a reason for the audience to care.
25%
Narrative Arc
Co-equal with Emotional Impact because structure makes emotional resonance repeatable, not accidental. Apple Standard: opens with a problem the audience already feels before the product is revealed. Tension, resolution, inevitability. No arc = no memory.
No discernible arc. Slides happen in sequence. No tension, no resolution, no moment where the product feels inevitable.
25%
Technical Credibility
Apple Standard: beautiful, honest, ships within a credible window. The hallway conversation happens immediately after. Three-layer standard: current product full integrity, roadmap shown as roadmap with a date, vision shown as vision with honest framing. Claims at keynote scale create credibility debt at keynote scale.
Figma or prototype shown as current product across thousands of simultaneous prospects. Creates compounding credibility debt in every downstream SE conversation.
20%
Visual Production
Production quality is part of the proof at keynote scale. A rough demo on the main stage signals the team did not care enough to polish what they are asking thousands of people to believe in. Unlike the Technical Deep-Dive, in a keynote polish signals commitment, not evasion.
Slides as safety blanket. Live environment risk not managed. Lag or navigation errors visible at scale.
20%
Memorability
Corporate Visions (150K+ decisions): 46% recall with deliberate repetition vs 10% without. Specific detail increases recall 24% with executive audiences. Memorability is an output of the preceding four dimensions, not an independent variable. Weighted lowest because you cannot engineer it independently.
Nothing quotable. Nothing the audience can repeat in the hallway or in a team meeting the next morning.
10%
The Apple Standard
The benchmark for a technical marketing keynote: beautiful enough to earn belief, grounded enough to survive the hallway conversation, and honest enough that the audience could deploy what they saw within a timeframe you would be willing to announce. That is what Apple has demonstrated for 25 years. It is achievable. It is the standard.
ANALYST
Analyst Briefing
KPI: Shift Quadrant or Wave positioning on vision and execution | Length: 15-30 min | Audience: Analyst who has briefed every vendor in the category
Dimension / Rationale | Common Failure Mode | Weight
Execution Evidence Quality
Demo shows shipped functionality only. Figma is disqualifying here, not penalized. Analysts write defensible paragraphs from what they can verify. Every requirement box must be checked with genuinely dynamic feature demonstrations. A former Forrester Wave analyst: every paragraph they wrote needed evidence, customer case studies, specific data, named references.
Screenshot or prototype passed off as current product. Analyst notes the gap. It compounds across MQ cycles.
25%
Competitive Context Awareness
Was the briefing built outside-in or as an inside-out checklist exercise? The inside-out approach: find features that map to RFI criteria, present them. Technically complete, strategically inert. The outside-in approach: how would Atlassian or BMC frame this category? The analyst is explicitly comparing. The briefing must reflect that awareness.
Team builds agenda around what they want to show. No evidence the team studied how category leaders position the problem.
20%
Customer / Peer Validation
The most underweighted lever in most briefing programs. Gartner Peer Insights reviews, named reference customers willing to take analyst calls, outcome metrics with customer names attached move Ability to Execute in ways the demo alone cannot. As of 2024, Forrester Wave market presence is determined by customer feedback, not market share.
Generic customer count without named references. Analyst cannot validate claims independently.
20%
Differentiation Specificity
Generic category claims score zero with an analyst who briefed three competitors that morning. "AI-powered platform" is a category claim. "Native AI inference at the workflow layer with three reference customers at 10M+ daily transactions" is differentiation with specificity.
"Industry-leading AI capabilities" with no named competitor comparison and no customer validation of the specific claim.
15%
Vision Credibility
Roadmap presented separately via PPT with specific ship dates and early access customers. Roadmap items without ship dates are liabilities, not assets. Analysts remember what you said last year. "H2 2026" with no beta customers is weaker than "Q2 2026, currently in early access with four customers."
Roadmap without ship dates or customer validation. Same slides as last year's briefing.
10%
Technical Integrity
Analysts have institutional memory across MQ cycles. Claims made without evidence create debt that compounds. Does every claim survive direct questioning three months later at the inquiry call?
Claims from prior cycle that did not ship, now repositioned without acknowledgment.
10%
What the TDCI Scores vs. What It Does Not
Analyst positioning reflects multiple inputs including commercial relationships, peer validation volume, and the quality of the briefing itself. The TDCI scores what practitioners directly control: evidence quality, differentiation specificity, customer validation, and roadmap credibility. These are the dimensions where a well-prepared briefing team can move the needle regardless of other factors.
CONFERENCE
Product Session / Conference
KPI: Right depth for the room | Length: 15-20 min | Audience: Mixed (VP of IT and senior architect)
Dimension / Rationale | Common Failure Mode | Weight
Depth Calibration
The format's primary KPI and primary failure mode. VP of IT and senior architect watching the same demo need different things. The opening 3 minutes must establish business context for the executive while signaling technical substance for the architect. Failure is symmetrical and equally damaging in both directions.
Too deep: VP disconnects at minute 4. Too shallow: architect dismisses the product as marketing by minute 8. Both destroy the session.
25%
Demo Environment Architecture
Attendees are here because they want their hands on the product. The environment must be on-rails but technically realistic and replicable when they leave the room. Cloud container instability, API failures, and demo data that does not reflect real deployment conditions are all failures of this dimension. The Demo Brief for this format requires an Environment Architecture Specification before build.
Live environment fails during delivery. Or: stable demo shows workflows the customer cannot reproduce. Both destroy credibility, one publicly.
20%
Hook Architecture
Conference attendees have a full schedule and no obligation to stay engaged. The hook must earn both tiers within the first 90 seconds. Corporate Visions: adding specific detail increases recall 24% with executive audiences. The executive needs the right depth framed at the right level, not less depth.
Generic problem statement that could open any vendor's session. Room does not engage because there is no signal this session is different.
15%
Narrative Arc
Three distinct moments: the business problem frame the VP recognizes from their Monday morning, the technical proof point the architect can mentally test, and the business outcome the VP can take back to their CFO. Both tiers must leave believing the session was worth their time.
Linear feature tour starting with setup. Business buyer loses the thread before the product earns its place.
15%
Dual Persona Coherence
Different from Depth Calibration. Depth is about altitude — too deep or too shallow overall. Dual Persona Coherence is about whether the demo has deliberately engineered inflection points for both personas. A session that nails one audience and ignores the other has failed half the room.
Technical depth for the architect with no business outcome framing for the VP. One tier leaves satisfied; the other leaves skeptical.
15%
Technical Proof Density
Enough product substance to earn the architect without losing the VP. This session is often the last technical gate before a POC commitment. Captured Simulation: stable enough to maintain pacing, real enough to satisfy technical scrutiny.
Too polished to be credible (architect suspects it cannot do what it showed) or too raw to follow (VP loses the thread).
10%
Workshop Sub-Format
In a fully hands-on workshop, Demo Environment Architecture carries 35% (up from 20%) because every participant is simultaneously in the environment and multi-user failure compounds publicly. Depth Calibration drops to 20% as participants self-select depth. Hook Architecture and Narrative Arc each drop to 10%. Dual Persona Coherence and Technical Proof Density hold at 15% and 10%.
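As a quick consistency check, the standard Product Session weights and the workshop reallocation described above can be set side by side; the dictionary names are illustrative, and both profiles must still total 100%.

```python
# Side-by-side weight profiles for the Product Session format, per the table and
# the workshop reallocation above. Dictionary names are illustrative.

PRODUCT_SESSION_WEIGHTS = {
    "Depth Calibration": 25,
    "Demo Environment Architecture": 20,
    "Hook Architecture": 15,
    "Narrative Arc": 15,
    "Dual Persona Coherence": 15,
    "Technical Proof Density": 10,
}

WORKSHOP_WEIGHTS = {
    **PRODUCT_SESSION_WEIGHTS,
    "Demo Environment Architecture": 35,  # multi-user environment failure compounds publicly
    "Depth Calibration": 20,              # participants self-select their own depth
    "Hook Architecture": 10,
    "Narrative Arc": 10,
    # Dual Persona Coherence (15) and Technical Proof Density (10) hold steady.
}

for name, profile in (("Product Session", PRODUCT_SESSION_WEIGHTS), ("Workshop", WORKSHOP_WEIGHTS)):
    assert sum(profile.values()) == 100, f"{name} weights must total 100%"
```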
The Standard
On-rails but technically realistic. The demo environment must survive hands-on interaction without going off the rails, and the path shown must be reproducible by the customer's own technical team after they leave the room. The Demo Brief exists precisely to prevent this failure before the build starts.
EXECUTIVE
Executive / EBC
KPI: Relationship advancement and deal acceleration | Length: Half day to full day | Audience: C-suite and VP-level decision makers
Dimension / Rationale | Common Failure Mode | Weight
Personalization Depth
ABPM research (11,100+ external customers over two decades): 88% of EBC attendees would recommend a company based on their briefing experience. Not the product. The experience. Preparation takes 4-6 weeks for well-run programs. The difference between an EBC that moves a deal and one that produces polite thank-you notes is almost entirely here.
Generic EBC with customer logo on the welcome screen. Same agenda as the previous twelve visitors. Surface personalization only.
25%
Exclusive Value Architecture
The EBC must deliver something the executive and architect could not access through any other touchpoint. What qualifies: executive-to-executive strategic conversation, demo pre-configured with their actual data, access to the engineer who built the feature, reference customer peer who speaks candidly. What does not qualify: a polished version of the standard SE demo.
Polished version of the standard product demo. Half-day C-suite investment not justified by the content delivered.
20%
Demo Architecture / 3-Layer Balance
Three-layer standard mandatory: current product (full integrity), roadmap (with ship date and named early access customers), vision (explicitly framed as vision). Includes Spontaneous Demonstration Readiness: "let me show you that" is the highest-value moment in an EBC. Requires an environment deep enough to go off-script and a presenter who knows the product at an engineering level.
Figma shown as current product causes permanent relationship-level trust damage. The architect who whispers to the executive in the car determines whether the deal advances.
20%
Executive Alignment
Right people across the table from right people. The architect who accompanies the CIO is often the one making the deployment decision. Focusing only on the most senior person while direct reports feel unseen is a documented EBC failure pattern. Both sides must be represented at the appropriate level.
Vendor brings senior SE to meet customer CIO. Every speaker addresses only the CIO. Direct reports, who make deployment decisions, disengage.
15%
Relationship Arc / Next Steps
The EBC sits in the middle of a relationship arc. Committed next steps, each with a named owner, a specific date, and a specific deliverable from both sides, agreed before the executive leaves the building, are the bridge between the EBC experience and deal advancement. Relationship momentum dissipates within 72 hours without a specific commitment.
Ends with "we'll follow up." No specific commitment, no named owner, no date. The relationship momentum generated in the room dissipates.
15%
Business Context Alignment
Pre-briefing research used visibly throughout. Content reflects genuine understanding of this customer's industry, competitive position, and strategic priorities. Asking basic company questions in the EBC that research should have answered signals the team built the agenda without studying the customer.
Asking in the briefing what pre-briefing research should have answered. "This is public information. Find it."
5%
The Standard
The EBC is not a demo delivery mechanism. The demo is proof material within a larger relationship conversation. The highest-value moments are unscripted: an executive asks an unexpected question and the presenter opens the product right now and shows the answer. That requires environment depth, engineering-level product knowledge, and a demonstration fast enough not to break the strategic conversation's momentum.
CUSTOMER SUCCESS
Customer QBR / Success
KPI: Renewal confidence and expansion momentum | Length: 60-90 min | Audience: Existing customer, executive sponsors
Dimension / Rationale | Common Failure Mode | Weight
ROI Evidence Quality
Gainsight: Conducting QBRs effectively doubles the likelihood of B2B customer renewals. The format's entire credibility rests on proving business outcomes against what the customer stated when they purchased. "Your IT team resolved 847 P1 incidents without human intervention in Q3" is a business result. "Platform usage increased 23% quarter-over-quarter" is an activity metric. Only business results score fully.
Adoption rates and feature utilization when the customer cares about cost reduction and time saved. Activity metrics instead of business outcomes.
30%
Strategic Alignment / Forward Focus
Mural research: 72% of senior executives consider QBRs a waste of time when too tactical, too long, too much information. QBRs that double renewal likelihood are forward-looking strategic conversations. Spend 20% on last quarter. Spend 80% on next two quarters.
Backward status report consuming the majority of meeting time. Executive sends the CSM a note that future QBRs should be delegated down a level.
20%
Adoption Evidence / Peer Benchmarking
The demo shows what the customer has actually built with the product: their data, their workflows, their outcomes. Not generic product capability. Then benchmark their adoption against comparable deployments. The customer identifies their own expansion gap before the vendor names it. A generic product capability demo in a QBR is a sales demo in disguise. Deployed customers know the difference in 60 seconds.
Generic product capability demo rather than the customer's own data. Sales demo in a success context breaks trust immediately.
20%
Roadmap Relevance / Inspiration
Three-layer balance applies, but deployed customers have a sharper eye for vaporware than prospects. Current product with full integrity, near-term roadmap with specific ship dates, vision as vision. The roadmap preview lands in the context of an adoption gap the customer just identified for themselves.
Same roadmap slides from Dreamforce six months ago. Customer knows what shipped and what did not. Trust erodes faster with deployed customers than prospects.
15%
Executive Engagement Level
Gainsight: QBRs achieve full value when executive sponsors from both sides are present. A QBR between a CSM and a middle manager is an account check-in, not a strategic business review. Renewal and expansion decisions live at the level above the person in the room.
Decision-makers absent. CSM-to-middle-manager meeting produces polite summary but no commitment authority. Renewal happens by default not by design.
10%
Next Step Architecture
The quarterly cadence makes a vague close disproportionately damaging. A QBR that ends with "we'll follow up" produces a worse version of itself 90 days later. Every next step needs a named owner, a specific date, and a defined deliverable from both sides.
Ends without specific owned actions. Next QBR starts from zero momentum rather than building on this one.
5%
The Standard
The QBR is not a renewal check-in with a product demo attached. The demo shows what the customer has built, benchmarks it against peers, and makes the expansion conversation feel like the customer's own idea. The vendor who makes the customer the hero of their own story earns the expansion conversation without initiating a sales pitch.

External evidence that
corroborates the weights.

The TDCI dimension weights are practitioner-derived. The following external research independently validates the relative importance of key dimensions across formats. Where research is cited, it corroborates practitioner judgment. Where it is silent, practitioner judgment stands alone and is stated as such.

Gong Labs
SaaS Demo Research (67K-study + 3M+ aggregate)
  • Original study: 67,149 SaaS demo recordings analyzed against CRM win/loss outcomes
  • Discovery Alignment is the single strongest predictor of demo success across all formats studied
  • No winning demo had more than 76 seconds of uninterrupted pitching
  • Winning demos follow the upside-down pyramid: outcome first, setup last
  • Winning reps spend 39% less time on features than average performers
  • 9-minute rule for core product demonstration validated across dataset
  • SE involvement increases enterprise win rates by up to 30% (1.8M opportunities, 2024)
  • 65:35 talk-to-listen ratio in winning demo calls
Applies to: Live SE Demo, Product Session, Technical Deep-Dive
Wistia
State of Video (100M+ Videos)
  • Engagement holds around 50% for the first three minutes then drops steadily
  • 17.3% engagement loss in the opening 2% of a 5-10 minute video
  • Short-form video engagement dropped 10% year-over-year in 2024
  • Educational and instructional content outperforms all other video types at similar lengths
  • Videos on landing pages and galleries see engagement above 40% on average
  • 5-30 minute videos average 38% engagement rate (2025)
Applies to: Video Teaser, Interactive / PLG, Explainer
Corporate Visions
150K+ B2B Buying Decisions
  • Repeating core message 20 times in 20 minutes: 46% recall vs 10% without deliberate repetition
  • Adding specific detail increases recall 24% with executive audiences
  • 53% of lost deals were winnable at the conversation level
  • Dynamic movement and annotation sustain attention vs. static slides
  • Research tested across 60+ academic studies and field trials
Applies to: Keynote, Analyst Briefing, Executive / EBC
ABPM / Electrosonic
20 Years of EBC Research (11,100+ Customers)
  • 88% of EBC attendees would recommend a company based on their briefing experience
  • Executive briefings strengthen relationships and have stronger purchase decision influence
  • EBCs increase deal size and shorten acquisition cycles
  • Preparation for a client visit takes 4-6 weeks for well-run programs
  • Personalization is consistently the primary value driver cited
Applies to: Executive / EBC
Gainsight / Mural
QBR Effectiveness Research
  • Gainsight: Effective QBRs double the likelihood of B2B customer renewals
  • Mural: 72% of senior executives consider QBRs a waste of time when they are too tactical
  • ROI evidence drives renewal; adoption metrics do not
  • Forward-looking QBRs outperform backward-looking reviews on all retention metrics
  • Executive sponsors present correlates with significantly higher renewal outcomes
Applies to: Customer QBR / Success
Peter Cohan (The Second Derivative) / PreSales Collective
Great Demo!, 3rd Ed. + State of PreSales 2025
  • Great Demo! is the only demo methodology validated across thousands of enterprise demos
  • "Do the Last Thing First" independently validated by Gong data at 3M+ demo scale
  • Situation Slide (2-min discovery summary) aligns exactly with Gong's opening phase findings
  • SE teams now spend approximately 3 hours per live demo, up 20% since 2022
  • Demo automation market: $500M in 2025, 25% CAGR. Tools focus on delivery, not coherence scoring.
Applies to: Live SE Demo, Product Session, all formats
On the relationship between research and practitioner judgment
External research corroborates dimension weights but does not prescribe them. No dataset measures enterprise technical marketing demos across all eight TDCI formats. Gong measures SaaS discovery demos. Wistia measures digital video engagement. ABPM measures EBC programs. The translation from "this factor correlates with success in this dataset" to "this dimension weight is defensible for this format" is practitioner judgment. Where the research is strong, it validates. Where it is silent, twelve years of direct experience is the source. Section 7.2 of the downloadable TDCI framework PDF states this explicitly and will be preserved in v1.1 of that document. It is a feature, not a limitation.
Technical Integrity Standard

TMMs own every claim.
All eight formats.

Technical marketers are accountable for what they demonstrate in a way brand teams are not. The analyst who was briefed on a capability will ask about it in the next inquiry. The architect who saw a feature demoed will request it in the POC. The executive who was shown an AI workflow will tell their board they saw it working. Technical Integrity is a professional accountability standard, not a style preference.

The Production Method Standard
Four tiers. One consistent principle.
Every demo has a production method. The production method sends a signal about the product's current state. The TDCI scores whether that signal is honest.
Full Credit
Live product or captured simulation of shipped functionality. The audience can access, deploy, and verify what they saw today.
Partial Credit
Prototype of functionality committed to ship within 2 quarters, with explicit disclosure: "This is our Q2 roadmap, currently in beta with three customers." The disclosure is the credential.
Hard Penalty
Figma or prototype of roadmap items with no ship date, no customer validation, and no disclosure. Capped at 40% of Technical Integrity dimension maximum in SE / Analyst / EBC contexts.
Automatic Fail
Figma in a PLG sandbox context or as current product in an analyst briefing demo. Any context where the audience will attempt to reproduce what they saw and cannot.
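A minimal sketch of how the four tiers above could be applied as an adjustment to a 0-100 Technical Integrity rating. The tier labels follow the standard; the specific partial-credit discount and the function name are illustrative assumptions, not values the TDCI prescribes.

```python
# Minimal sketch of the four production-method tiers applied to a 0-100 Technical
# Integrity rating. Tier labels follow the standard above; the partial-credit
# discount and the function name are illustrative assumptions.

def technical_integrity_credit(raw_rating: float, tier: str) -> float:
    if tier == "full_credit":         # live product or captured simulation of shipped functionality
        return raw_rating
    if tier == "partial_credit":      # disclosed prototype committed to ship within 2 quarters
        return raw_rating * 0.7       # illustrative discount; the disclosure is the credential
    if tier == "hard_penalty":        # undisclosed Figma/prototype of unvalidated roadmap items
        return min(raw_rating, 40.0)  # capped at 40% of the dimension maximum (SE, Analyst, EBC contexts)
    if tier == "automatic_fail":      # e.g. Figma in a PLG sandbox, or shown as current product to an analyst
        return 0.0
    raise ValueError(f"unknown production method tier: {tier}")
```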
The Three-Layer Balance
Current. Roadmap. Vision. All three. All labeled.
Enterprise vendors in an AI arms race are blurring the layers. The result is credibility debt that compounds across buying cycles. The standard is not about limiting what you show. It is about labeling it honestly.
Current
Shipped and deployable today. Shown with full Technical Integrity. Live product or captured simulation. The foundation every other layer rests on.
Roadmap
Committed near-term delivery with a specific ship date and named early access customers. Shown as roadmap. The audience hears commitment. The architect hears honesty. Both can work with that.
Vision
Longer-horizon direction, shown explicitly as vision. Acceptable in keynote and EBC contexts with honest framing. Not acceptable in analyst demo or SE discovery contexts.
The Failure Mode
The AI arms race has made vision demos the default at major enterprise events. Customers and analysts are becoming calibrated to this. The practitioner who shows only what ships, labels what does not, and names the date builds trust that compounds. The practitioner who shows everything without disclosure creates debt that compounds instead.

Built from
twelve years
of the work.

The TDCI is not a framework built by analysts observing the discipline from outside. It reflects what practitioners who have built keynotes, briefed analysts, and scaffolded SE enablement programs actually know about what makes a demo work.

It is a practitioner standard written by a practitioner. The judgment that produced the failure mode taxonomy, the dimension weights, and the scoring philosophy came from building the work at scale for enterprise software audiences.

12+
Years enterprise software technical marketing
3
Consecutive Knowledge mainstage keynote architectures
F500
Buyers including CERN, Disney, USAF, USAA, Stellantis
MQ+
Gartner MQ and Forrester Wave analyst relations cycles
Senior Technical Marketing Manager. Twelve-plus years at ServiceNow spanning hands-on technical delivery through senior IT strategy leadership.
Architect-level keynote construction for enterprise audiences at scale. Knowledge conference mainstage for three consecutive years.
Analyst relations across Gartner Magic Quadrant and Forrester Wave evaluation cycles. The TDCI Analyst Briefing category was built from direct experience in those rooms.
SE enablement scaffolding for global field teams. The Live SE Demo category failure modes are drawn from observing hundreds of demos in the field.
U.S. Navy veteran. MBA, University of Arizona Global Campus. BA in Political Science and Communications, University of San Diego.
More about Chad

Attribution and Use
The ThinkRoot Demo Coherence Index (TDCI) is an original framework developed by Chad Corriveau, Founder of ThinkRoot, published under the ThinkRoot Demo Intelligence platform. The TDCI may be referenced, cited, and applied by practitioners in their own work with attribution to ThinkRoot (thinkroot.io). Commercial reproduction, resale, or incorporation into competing scoring tools or platforms without written permission is prohibited. Dimension weights are practitioner-derived and reflect accumulated judgment from direct experience. They are not derived from statistically validated empirical research and are not presented as such. See Section 7.2 of the downloadable TDCI framework PDF for the full statement on methodology and limitations.

TDCI Version 1.1

Version 1.1 — March 2026. Added full dimension tables for all eight formats, external research citations, Technical Integrity standard, and production method principles. Dimension weights are practitioner-derived and flagged for revision as real-world scoring data from the Demo Coherence Scorer accumulates. Version 2.0 target: incorporate aggregate scoring patterns from the ThinkRoot Demo Intelligence tool, add the Competitive Benchmark Layer (side-by-side TDCI scoring across matched demo corpora), and refine weights where observed scoring data diverges from practitioner judgment. Version history and change rationale will be published transparently with each release.

Next Step

Download the framework.
Then score a real demo.

The TDCI document is the standard. The Demo Coherence Scorer is the tool that applies it. Category-aware scoring, six production methods, three output tabs. Free for public demos.

Download the TDCI Framework
Score a Demo (Free)