Key 01 (Readiness score): 69/100

Complete the planning task first on this AI sales coaching page: enter your coaching baseline and get a structured AI sales coaching rollout draft. Then switch to the report layer to validate evidence quality, boundaries, and risk before scaling.
Tool-first workflow for AI sales coaching decisions: input team baseline, generate readiness and ROI guidance, then validate boundaries, evidence quality, and risks before scaling.
Results include recommendation, KPI changes, uncertainty, boundaries, and next actions.
Review key numbers, recommendation rationale, and fit boundaries before deciding your rollout path.
Preview mode: summary cards below use the default baseline scenario. Run the tool above to switch to your generated numbers.
| Key | Value |
|---|---|
| Key 01 (Readiness score) | 69/100 |
| Key 02 | +8.4 pct |
| Key 03 | $4,193,437 |
| Key 04 | 73/100 (±18%) |
| Conclusion | Boundary | Sources | Status |
|---|---|---|---|
| AI adoption is mainstream, but execution intensity is uneven and often shallow. | Do not treat experimentation as readiness; track weekly active usage, AI-assisted work-hour share, and cross-system integration. | S1,S2,S6 | Verified |
| Coaching and performance workflows combined with gen AI correlate with stronger market-share outcomes. | This is correlation, not guaranteed causality; require pilot control groups before budget expansion. | S4 | Partial |
| Training programs have a visible cost floor that must be modeled before AI ROI claims. | If spend baseline is missing, net-impact estimates should be treated as directional only. | S3 | Verified |
| Workforce-facing deployments require jurisdiction-level controls, not a single global policy. | EU timeline controls, NYC bias-audit/notice obligations, and ADA accommodation paths should be designed before scale. | S7,S8,S9,S13 | Verified |
| More precise AI recommendations do not automatically produce better coaching outcomes. | Field-test feedback granularity by rep seniority and keep manager mediation in the loop. | S5,S14 | Partial |
| 12-month retention uplift from AI-powered coaching programs remains unproven in public data. | Mark as pending confirmation and require 6-12 month cohort validation before annual lock-in. | S5,S14,S15 | Pending |
Transparent assumptions, source registry, and known/unknown list prevent overconfident planning.
| Gap | Why it matters | Stage1b update | Status |
|---|---|---|---|
| Source registry had stale links and weak freshness metadata | Broken or undated sources reduce auditability and make leadership sign-off harder. | Rebuilt the registry with accessible, dated references (S1-S15), including refreshed ATD URL and explicit survey scope. | Closed |
| Risk section under-covered US employment AI obligations | Performance tracking can become employment decision input, creating legal exposure if audit and accommodation paths are missing. | Added NYC LL144 and ADA obligations with concrete triggers, and tied them to boundary/risk tables. | Closed |
| Adoption breadth was conflated with true execution depth | High headline adoption can still hide low weekly usage intensity, causing ROI over-forecast. | Added NBER intensity data (weekly usage + work-hour share) and required active-usage checks before scale decisions. | Closed |
| Counterexamples on AI coaching recommendation quality were thin | Without counterexamples, teams may assume "more precise AI suggestions" always improves rep outcomes. | Added peer-reviewed evidence showing over-precise AI recommendations can hurt self-efficacy without manager mediation. | Closed |
| Long-term causal evidence on sales-training retention is limited | Budget lock-ins may assume persistent uplift without public RCT support. | Explicitly marked as pending confirmation and required 6-12 month cohort validation before annual lock-in. | Pending |
| Assumption | Default | Why | Update trigger |
|---|---|---|---|
| Ramp gain conversion coefficient | 0.36 | Avoids over-crediting short-term onboarding gains. | Replace with cohort data when available. |
| Manager capacity baseline | 8 hours/week | Coaching execution is the behavior-change bottleneck. | Recalibrate if manager-to-rep ratio shifts >20%. |
| Compliance penalty | 4-6 points | Reflects legal review latency and rollout constraints. | Lower only after legal SLA is proven stable. |
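To make the defaults above concrete, here is a minimal sketch of how they might enter a net-impact estimate. The function names, the linear form, and the 5-point penalty midpoint are assumptions for illustration, not the tool's actual model:

```python
# Minimal sketch of how the default assumptions above could combine.
# The formula, names, and penalty midpoint are illustrative assumptions,
# not the tool's published model.

RAMP_GAIN_COEFFICIENT = 0.36  # avoids over-crediting short-term onboarding gains
COMPLIANCE_PENALTY = 5        # assumed midpoint of the 4-6 point default range

def adjusted_readiness(raw_score: float) -> float:
    """Apply the compliance penalty to a raw 0-100 readiness score."""
    return max(0.0, raw_score - COMPLIANCE_PENALTY)

def credited_ramp_gain(observed_gain_weeks: float) -> float:
    """Discount observed ramp-time gains before crediting them to ROI."""
    return observed_gain_weeks * RAMP_GAIN_COEFFICIENT

# Example: a raw score of 74 nets out to 69 after the penalty;
# a 10-week observed ramp gain is credited as about 3.6 weeks.
print(adjusted_readiness(74))
print(credited_ramp_gain(10.0))
```

Per the update triggers in the table, both constants should be replaced once cohort data or a stable legal SLA is available.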
| Concept | What it includes | What it is not | Minimum condition | Failure signal |
|---|---|---|---|---|
| AI coaching and performance tracking | Adjusts drills by role, region, and behavior signals. | One-size-fits-all script generation. | Needs clean CRM stages + coaching feedback loops. | Advice quality converges to generic templates after week 2. |
| AI automation | Speeds note taking, summaries, and follow-up drafts. | Does not by itself improve rep skill progression. | Track if saved time is reinvested in coaching. | Admin workload drops but win-rate and ramp stay flat. |
| AI coaching recommendation | Prioritizes next-best coaching actions with confidence tags. | Fully autonomous performance evaluation. | Needs manager calibration cadence and documented overrides. | Manager disagreement rises for three consecutive cycles. |
| AI performance scoring in employment context | Flags coaching-risk patterns and routes high-impact decisions to human review. | Sole basis for promotion, compensation, or disciplinary actions. | Requires bias audit cadence, accommodation path, and override logging. | No annual audit evidence or no documented appeal channel for impacted employees. |
| Autonomous coaching agent | Can orchestrate prompts and sequencing with minimal supervision. | Not suitable as default in high-compliance environments. | Requires explicit legal gates, audit logs, and fallback controls. | Unable to provide traceable rationale for high-impact feedback. |
| ID | Source | Key data | Published | Checked |
|---|---|---|---|---|
| S1 | Salesforce: State of Sales 2026 landing page | Salesforce State of Sales 2026 page states that nine in ten sales teams use agents or expect to within two years, and highlights 94% leader agreement that agents are essential to growth. | 2026-01 | 2026-03-04 |
| S2 | Salesforce State of Sales Report 2026 (PDF) | The report PDF (updated 2026-01-27) highlights agent and AI execution constraints, including that 51% of sales leaders report tech silos hinder AI impact. | 2026-01-27 | 2026-03-04 |
| S3 | ATD 2023 State of Sales Training | Median annual sales training spend was USD 1,000-1,499 per seller; sales kickoff adds another USD 1,000-1,499. | 2023-07-05 | 2026-03-04 |
| S4 | McKinsey: State of AI in B2B Sales and Marketing | Nearly 4,000 decision makers surveyed: companies combining advanced commercial personalization with gen AI are 1.7x more likely to increase market share. | 2024-09-12 | 2026-03-04 |
| S5 | NBER Working Paper 31161 | Study of 5,179 support agents: generative AI increased productivity by 14% on average, with 34% gains for novice and low-skilled workers. | 2023-04 (rev. 2023-11) | 2026-03-04 |
| S6 | NBER Working Paper 32966 | Nationally representative 2024-2025 surveys show rapid adoption (39.4% adults used gen AI), but work-hour intensity remains concentrated at roughly 1-5%. | 2024-08 (rev. 2025-08-26) | 2026-03-04 |
| S7 | European Commission: EU AI Act | AI Act entered into force on 2024-08-01; prohibited practices applied from 2025-02-02, GPAI obligations from 2025-08-02, and high-risk obligations from 2026-08-02. | 2024-08-01 (timeline checked 2026-02-18) | 2026-03-04 |
| S8 | NYC DCWP: Automated Employment Decision Tools | Employers must complete an independent bias audit within one year before using an AEDT and provide candidate/employee notice at least 10 business days in advance. | 2023-07-05 | 2026-03-04 |
| S9 | ADA.gov: AI guidance for disability rights | Employers remain responsible for ADA compliance when using AI tools and must provide reasonable accommodation plus alternatives where AI may screen out people with disabilities. | 2024-05-16 | 2026-03-04 |
| S10 | NIST AI RMF Playbook | Playbook keeps govern-map-measure-manage implementation patterns and notes AI RMF 1.0 is being revised; update plans should avoid hard-coding stale controls. | 2023-01 (revision note checked 2025-11-20) | 2026-03-04 |
| S11 | NIST AI 600-1 (Generative AI Profile) | Published in July 2024 to extend AI RMF with GenAI-specific guidance across content provenance, misuse monitoring, and model risk controls. | 2024-07 | 2026-03-04 |
| S12 | ISO/IEC 42001:2023 AI management systems | First certifiable international AI management system standard, published in December 2023. | 2023-12 | 2026-03-04 |
| S13 | EUR-Lex: GDPR Article 22 | Individuals have the right not to be subject to decisions based solely on automated processing with legal or similarly significant effects. | 2016-04-27 | 2026-03-04 |
| S14 | Journal of Business Research (2025): AI precision in coaching | Two studies (N=244, N=310) found that highly precise AI recommendations can lower salespeople self-efficacy and degrade coaching outcomes without manager mediation. | 2025-05 | 2026-03-04 |
| S15 | NBER Working Paper 34174 | An estimated 25%-40% of workers in the US and Europe are in jobs where retraining for AI-supported software development tasks can improve productivity. | 2025-09 | 2026-03-04 |
| Topic | Status | Impact | Minimum action |
|---|---|---|---|
| 12-month retention uplift from AI-powered coaching programs | Pending | No reliable public RCT was found for this exact scenario; annual ROI can be overstated. | Mark as pending confirmation and run 6-12 month cohort validation before annual budget lock-in. |
| Cross-jurisdiction employment AI obligations | Partial | EU, NYC, and disability-rights obligations differ by trigger and timeline, which can delay global rollout if treated as one policy. | Maintain jurisdiction-level control matrices and refresh legal checkpoints quarterly. |
| Manager scoring consistency across cohorts | Known | Inconsistent scorecards reduce trust in AI recommendations. | Keep biweekly calibration and archive override logs for auditability. |
| Recommendation granularity by rep seniority | Partial | Overly precise AI recommendations can reduce self-efficacy for certain seller cohorts and weaken outcomes. | A/B test feedback granularity and require manager-mediated coaching for low-confidence cohorts. |
| Usage intensity to KPI elasticity | Partial | Fast adoption headlines may still map to small AI-assisted work-hour share, creating inflated short-term ROI expectations. | Set scale gates on weekly active usage and AI-assisted hours before extrapolating quota lift. |
Use structured comparisons and risk controls to make practical rollout choices.
| Dimension | Manual training | AI generic | Hybrid planner | Autonomous agent |
|---|---|---|---|---|
| Time-to-value | Slow (8-16 weeks) | Medium (4-8 weeks) | Medium-fast (3-6 weeks) | Fast setup, volatile outcomes |
| Data prerequisites | Low; relies on human notes | CRM baseline + prompt templates | CRM + conversation + manager feedback loops | Full signal stack + strict data governance |
| Governance load | Low | Medium | Medium-high with explicit controls | High |
| Evidence strength | Operational history, low transferability | Vendor evidence, mixed rigor | Cross-source + pilot validation required | Limited public evidence in sales-training context |
| Typical failure mode | Manager capacity bottleneck | Template drift and low adoption | Calibration not maintained after pilot | Compliance and explainability breakdown |
| Best-fit condition | Small teams with senior coaches | Need fast enablement with low setup cost | Need measurable uplift with controlled risk | Only with mature governance and legal approvals |
| Risk | Trigger | Business impact | Tradeoff | Minimum mitigation | Source + date |
|---|---|---|---|---|---|
| EU compliance deadline missed | EU-facing rollout without controls for the 2025-02-02, 2025-08-02, and 2026-08-02 milestones. | Launch delay, legal exposure, and forced feature rollback. | Faster launch vs regulatory certainty. | Map controls to EU AI Act timeline and keep jurisdiction-level legal sign-off gates. | S7 (timeline checked 2026-02-18) |
| Employment-decision challenge from workers | Promotion, compensation, or disciplinary outcomes are tied to AI scores without audit, notice, or accommodation channels. | Program trust drops, complaints rise, and regional deployment can be blocked by regulators or works councils. | Automation efficiency vs legal defensibility. | Require annual bias audits, 10-business-day notice, accommodation workflow, and documented human appeal paths. | S8,S9,S13 |
| Data quality debt masks true coaching impact | Revenue systems are disconnected and frontline data cleaning is delayed. | Confidence score inflates while real behavior change stalls. | Speed of rollout vs reliability of metrics. | Gate scale decisions on data hygiene KPIs and calibration pass rates. | S1,S10 (rev. note 2025-11-20) |
| Manager adoption fatigue | Calibration sessions or manager-mediated coaching loops are skipped for multiple cycles. | AI suggestions drift from frontline reality and over-precise feedback can reduce seller confidence. | Lower management overhead vs sustained coaching quality. | Protect manager coaching capacity and tie calibration completion to operating reviews. | S1,S3,S14 |
| Adoption-intensity mismatch | Leadership extrapolates annual quota uplift before weekly active usage and AI-assisted hours clear minimum thresholds. | Forecast bias, budget misallocation, and rollout fatigue after early optimism. | Fast narrative wins vs measurable execution depth. | Set hard gates on weekly active usage and AI-assisted work-hour share before scaling ROI assumptions. | S6 |
| Over-claiming long-term ROI without public causal evidence | Annual budget is locked based on short pilot uplifts only. | Forecast bias and painful rollback if uplift decays after quarter two. | Aggressive scaling narrative vs defensible financial planning. | Label as pending and require 6-12 month cohort evidence before full lock-in. | S5,S14,S15 |
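The usage-intensity gates named in the mitigation column can be sketched as a simple pre-scale check. The threshold defaults and function name below are placeholders for illustration, not recommendations from the cited sources:

```python
# Hypothetical pre-scale gate: block ROI extrapolation until weekly active
# usage and AI-assisted work-hour share clear minimum floors.
# Threshold defaults are placeholders only.

def scale_gate(weekly_active_share: float, ai_hour_share: float,
               min_weekly_active: float = 0.50, min_ai_hours: float = 0.05):
    """Return (passes, reasons) for a scale decision based on usage intensity."""
    reasons = []
    if weekly_active_share < min_weekly_active:
        reasons.append(f"weekly active usage {weekly_active_share:.0%} is below "
                       f"the {min_weekly_active:.0%} floor")
    if ai_hour_share < min_ai_hours:
        reasons.append(f"AI-assisted work-hour share {ai_hour_share:.0%} is below "
                       f"the {min_ai_hours:.0%} floor")
    return (len(reasons) == 0, reasons)

# With the S6 baseline (23% weekly work use, roughly 1-5% AI-assisted hours),
# this gate fails and ROI extrapolation should wait.
passes, reasons = scale_gate(weekly_active_share=0.23, ai_hour_share=0.03)
```

Returning the failing reasons alongside the boolean keeps the gate auditable in decision memos rather than a silent pass/fail.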
| Scenario | Assumptions | Process | Expected outcome | Counterexample / limit |
|---|---|---|---|---|
| Enterprise onboarding acceleration | 80 reps, weekly coaching, medium compliance. | Run six-week pilot across two cohorts. | Ramp reduction 2.5-4.5 weeks with confidence ~75. | If manager calibration drops below 80% completion for two cycles, projected gains usually do not hold. |
| Regulated mid-market pilot | 32 reps, high compliance, partial taxonomy. | Restrict automated coaching recommendations to legal-approved script domains. | Pilot recommendation with controlled ROI and lower risk. | If region-specific consent controls are absent, rollout should pause even when pilot KPIs look positive. |
| Resource-constrained team | 20 reps, monthly coaching, CRM-only signals. | Run 30-day stabilization sprint before pilot. | Stabilize tier until readiness and confidence improve. | If data quality and taxonomy stay unchanged, automation may increase activity but not quota attainment. |
Stage1c gate snapshot with explicit blocker/high thresholds and tracked medium/low backlog items.
| Severity | Count |
|---|---|
| Blocker | 0 |
| High | 0 |
| Medium | 1 |
| Low | 0 |
Gate status: PASS (stage1c, blocker=0, high=0)
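The PASS rule (zero blocker and zero high findings, with medium/low tracked as backlog) reduces to a one-line check. The function below is our sketch of that rule, not the page's actual implementation:

```python
# Sketch of the stage gate shown above: PASS requires zero blocker and zero
# high findings; medium and low findings go to the tracked backlog.

def gate_status(blocker: int, high: int, medium: int, low: int) -> str:
    status = "PASS" if blocker == 0 and high == 0 else "FAIL"
    return f"{status} (blocker={blocker}, high={high}, backlog={medium + low})"

print(gate_status(blocker=0, high=0, medium=1, low=0))
# -> PASS (blocker=0, high=0, backlog=1)
```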
Audit snapshot refreshed on 2026-03-04. Pending evidence is explicitly labeled and gated from scale decisions.
Grouped FAQ supports decision intent, then hands off to actionable next paths.
Design structured coaching loops and role-based enablement plans.
Build role-play drills and skill scorecards for frontline reps.
Evaluate rep capability and prioritize coaching actions.
Use tool outputs for immediate execution and keep report evidence in decision memos for auditability.
This round audits evidence gaps on the existing page and adds source-verifiable increments with explicit dates. Where public evidence remains weak, we keep the item in pending status instead of forcing certainty.
Closed gaps: 8; pending items: 1; source registry: R1-R15.
New facts span 2023-04-25 to 2025-11-19; regulatory timeline pages and all links were rechecked on 2026-03-04.
| Gap | Risk | Stage1b increment | Sources | Status |
|---|---|---|---|---|
| US compliance timeline for Colorado AI obligations was treated as static (Feb 1, 2026). | Rollout calendars can misfire if legal obligations shift and release gates are not updated. | Added SB25B-004 enacted fact: SB24-205 requirements were extended to June 30, 2026 (approved 2025-08-28; effective 2025-11-25). | R1,R2 | Closed |
| Decision memo did not capture failed alternative timeline proposals. | Teams may assume future delays are guaranteed and under-invest in near-term controls. | Added HB25B-1009 status: proposal to move date to 2027-08-01 was postponed indefinitely (8-4 vote) and marked Lost. | R3 | Closed |
| Adoption breadth was still used as proxy for realized business lift. | Teams can over-forecast quota and lock annual spend before usage intensity proves durable. | Added NBER w32966 usage-intensity baseline: 23% weekly work use, 9% daily use, 1-5% AI-assisted work hours, and 1.4% reported time savings. | R9 | Closed |
| Productivity evidence lacked transfer conditions from support functions to quota-carrying sales roles. | Leaders may directly apply generic AI uplift to revenue plans and miss role-specific variance. | Added NBER w31161 heterogeneity evidence (+14% average, +34% for novice workers) and explicitly marked transfer-to-sales as conditional, not automatic. | R8 | Closed |
| Narrative assumed time saved automatically changes work mix and pipeline quality. | Pilot teams may pass/fail programs on vanity efficiency metrics instead of deal outcomes. | Added NBER w33795 field experiment result: treated users spent ~2 fewer email hours per week, but researchers found no detectable shift in task quantity/composition. | R10 | Closed |
| Enforcement language around AI-assisted personnel decisions was too abstract. | Leaders may rely on vendor positioning and miss concrete legal exposure under existing law. | Added EEOC + DOL hooks: Title VII applies to automated selection decisions, four-fifths rule is not a safe harbor, and significant employment decisions require meaningful human oversight. | R11,R12 | Closed |
| GenAI control discussion stayed conceptual without concrete risk-control catalogs. | Security programs may skip model-specific threats (hallucination, malicious training data, misuse). | Added NIST final guidance references: AI 600-1 lists 12 risk classes with 200+ actions, and SP 800-218A extends SSDF for generative AI and dual-use foundation models. | R13 | Closed |
| EU timeline assumptions lacked explicit handling for standards/support-tool dependencies. | Cross-region launches can either over-delay revenue or ship before obligations actually apply. | Added Commission clarification: employment/worker-management use cases are high-risk; Nov 19, 2025 simplification proposal ties application to standards/support tools with up to 16 months compliance window (still under EU legislative negotiation). | R14,R15 | Closed |
| Long-horizon (6-12 month) public benchmark for AI sales-coaching retention remains weak. | Annual ROI lock-in can overstate durability and create expensive rollback risk. | Explicitly kept as pending: do not hard-lock annual budgets without internal cohort validation. | No reliable public benchmark yet | Pending confirmation / no reliable public data |
| ID | Source | Fact added | Published | Checked |
|---|---|---|---|---|
| R1 | Colorado General Assembly: SB24-205 Consumer Protections for Artificial Intelligence | Bill summary states developer/deployer obligations apply on and after 2026-02-01 for high-risk AI systems. | 2024-05-17 | 2026-03-04 |
| R2 | Colorado General Assembly: SB25B-004 Increase Transparency for Algorithmic Systems | Enacted summary says requirements in SB24-205 are extended to 2026-06-30; approved 2025-08-28 and effective 2025-11-25. | 2025-08-28 | 2026-03-04 |
| R3 | Colorado General Assembly: HB25B-1009 Artificial Intelligence Systems | The bill proposed moving effective date to 2027-08-01, but committee action postponed it indefinitely (8-4) and status is Lost. | 2025-08-21 | 2026-03-04 |
| R4 | OECD (2025-12-19): How widespread is algorithmic management in workplaces? | Based on over 6,000 managers across six countries: US adoption reaches 90%; nearly two-thirds of users report concerns, with accountability and explainability among top issues. | 2025-12-19 | 2026-03-04 |
| R5 | FTC press release on DOJ/CFPB/EEOC/FTC joint AI statement | Joint stance (2023-04-25): no AI exemption from existing law; agencies will vigorously enforce authorities against automated-system harms. | 2023-04-25 | 2026-03-04 |
| R6 | UK ICO: Automated decision-making and profiling guidance | Guidance banner states it is under review because the Data (Use and Access) Act came into law on 2025-06-19. | 2025-06-19 (law date) | 2026-03-04 |
| R7 | UK ICO: DUAA summary of data protection changes | ICO summary says automated decision provisions now allow wider use of solely automated significant decisions with safeguards, and remove pre-DUAA restrictions in many cases. | 2025 | 2026-03-04 |
| R8 | NBER Working Paper 31161: Generative AI at Work | Field data from 5,179 customer-support agents: AI assistant raised productivity by 14% on average, with a 34% gain for novice/low-skilled workers. | 2023-04 (rev. 2023-11) | 2026-03-04 |
| R9 | NBER Working Paper 32966: The Rapid Adoption of Generative AI | US survey evidence: 23% of employed respondents used GenAI at work in the prior week (9% daily), while only 1-5% of total work hours were AI-assisted and reported time savings equaled 1.4% of work hours. | 2024-09 (rev. 2025-02) | 2026-03-04 |
| R10 | NBER Working Paper 33795: Shifting Work Patterns with Generative AI | Randomized field experiment across 66 firms and 7,137 workers: active treated users spent ~2 fewer email hours weekly, but no detectable shift in task quantity/composition. | 2025-05 (rev. 2025-11) | 2026-03-04 |
| R11 | EEOC FY2023 report: AI, Title VII, and adverse-impact checks | EEOC states Title VII applies to employers' automated selection decisions and clarifies that passing the four-fifths rule does not guarantee no disparate impact finding. | FY2023 report (released 2024-03-11) | 2026-03-04 |
| R12 | U.S. Department of Labor: AI Best Practices roadmap | The Oct 16, 2024 release explicitly recommends meaningful human oversight for significant employment decisions and transparency to workers. | 2024-10-16 | 2026-03-04 |
| R13 | NIST + U.S. Commerce AI guidance package (AI 600-1, SP 800-218A) | NIST states AI 600-1 centers on 12 risk areas with just over 200 mitigation actions; SP 800-218A adds secure-development practices for generative AI and dual-use foundation models. | 2024-07-26 | 2026-03-04 |
| R14 | European Commission AI Act policy page | The AI Act page classifies employment and workers-management AI as high-risk and lists phased obligations (Aug 2026/Aug 2027 for high-risk scopes). | AI Act in force 2024-08-01 (timeline checked 2026-03-04) | 2026-03-04 |
| R15 | European Commission digital omnibus news update | News article (2025-11-19) says high-risk AI rules would apply once standards/support tools are available, giving companies up to 16 months to comply; proposal remains under Parliament/Council negotiation. | 2025-11-19 | 2026-03-04 |
| Common claim | Counterexample | Decision action | Sources |
|---|---|---|---|
| "Adoption is high, so governance risk is low." | OECD shows US adoption at 90%, but nearly two-thirds of users still report trustworthiness concerns. | Pair adoption KPI with accountability/explainability incident tracking before scale approval. | R4 |
| "Weekly AI usage means durable productivity gains are already locked." | NBER shows adoption can be broad while intensity remains shallow: only 1-5% of work hours are AI-assisted and reported savings are 1.4% of work hours. | Require usage-intensity floor plus outcome metrics (stage conversion, deal velocity) before annual budget lock-in. | R9 |
| "Time saved by copilots automatically improves pipeline quality." | In a randomized experiment, treated workers saved about two email hours per week but task quantity/composition did not shift detectably. | Track where saved hours are reinvested (deal prep, objection coaching, manager reviews) instead of assuming revenue impact. | R10 |
| "Regulatory dates are fixed once we set rollout plan." | Colorado timeline changed from 2026-02-01 baseline (SB24-205) to 2026-06-30 via enacted SB25B-004; another delay proposal (HB25B-1009) failed. | Use quarterly legal refresh checkpoint rather than one-time timeline assumptions. | R1,R2,R3 |
| "Vendor compliance statement is enough legal shield." | US agencies explicitly state there is no AI exemption to existing law and commit to active enforcement. | Keep internal accountability map and legal sign-off on high-impact coaching uses. | R5 |
| "Passing the four-fifths rule means AI-enabled selection is legally safe." | EEOC states Title VII still applies to automated selection, and four-fifths compliance alone does not guarantee no disparate-impact finding. | Add adverse-impact analysis cadence, legal review, and meaningful human oversight for significant decisions. | R11,R12 |
| "UK automated-decision rules are stable, copy once globally." | ICO states guidance is under review post-DUAA and law now expands scenarios where solely automated decisions may be used with safeguards. | Maintain UK-specific overlay in policy pack and revalidate with each guidance update. | R6,R7 |
| "EU high-risk obligations only follow a fixed date, regardless of implementation readiness." | Commission updates now pair implementation with standards/support tools and mention up to 16 months for compliance under the simplification proposal. | Use dual-track planning: baseline legal date + standards/support-tool readiness checkpoint. | R14,R15 |
| Decision question | Boundary | Applicability condition | Tradeoff | Sources |
|---|---|---|---|---|
| Can generic AI productivity lift be directly mapped to quota-carrying sales teams? | No. Transfer from support-function evidence to sales outcomes is conditional. | Require role-segmented holdout tests (SDR/AE/AM) with win-rate, cycle-length, and manager mediation metrics. | Slower scale pace but materially lower forecast-error risk. | R8,R10 |
| Is adoption and time saved enough for annual ROI lock-in? | No. Efficiency proxies are insufficient without revenue-quality confirmation. | Combine usage intensity (work-hour share) with stage-conversion and no-decision/rollback rates. | Higher instrumentation effort vs lower false-positive scale approvals. | R9,R10 |
| Can AI coaching score be used directly for compensation/promotion? | Treat this as a significant employment decision context with mandatory human oversight. | Apply adverse-impact testing cadence and preserve legal-review trail beyond four-fifths screening. | Decision speed drops, while legal defensibility and employee-trust quality improve. | R11,R12 |
| Can one policy timeline cover US + UK + EU rollout without legal drift? | No. Use jurisdiction overlays and keep timeline assumptions versioned. | Refresh each quarter using enacted law status plus standards/support-tool readiness updates. | Higher governance overhead vs lower risk of launch-window misalignment. | R1,R2,R3,R6,R7,R14,R15 |
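One way to keep timeline assumptions versioned per jurisdiction is a small milestone map that the quarterly legal refresh can diff against. The structure and field names below are ours; the dates come from the registry entries cited (S7/R14 for the EU, R2 for Colorado, S8 for NYC):

```python
# Hypothetical jurisdiction overlay: compliance milestones pulled from the
# source registry above. Structure and field names are illustrative; dates
# are from S7/R14 (EU), R2 (Colorado, as extended), and S8 (NYC).
from datetime import date

JURISDICTION_MILESTONES = {
    "EU":     {"high_risk_obligations_apply": date(2026, 8, 2)},   # S7/R14
    "US-CO":  {"sb24_205_requirements_apply": date(2026, 6, 30)},  # R2
    "US-NYC": {"aedt_notice_lead_business_days": 10},              # S8
}

def milestones_within(days: int, today: date) -> list:
    """List date-typed milestones falling within the next `days` days."""
    hits = []
    for region, fields in JURISDICTION_MILESTONES.items():
        for name, value in fields.items():
            if isinstance(value, date) and 0 <= (value - today).days <= days:
                hits.append(f"{region}: {name} on {value.isoformat()}")
    return hits

# As of 2026-03-04, only the Colorado date falls within the next 120 days.
print(milestones_within(120, date(2026, 3, 4)))
```

Keeping the map in version control makes each quarterly refresh an explicit diff rather than a silent assumption change.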
| Risk | Decision impact | Minimum mitigation | Evidence |
|---|---|---|---|
| Regulatory timeline drift in active rollout quarter | Release checklist can become stale; control milestones miss legal cutoff. | Add legal timeline owner + quarterly checkpoint + release-gate dependency. | R1,R2,R3,R14,R15 |
| Title VII / worker-rights exposure when coaching signals feed personnel decisions | Discrimination disputes and enforcement exposure rise if systems are treated as self-justifying. | Document decision ownership, run adverse-impact checks, and retain meaningful human-oversight records. | R5,R11,R12 |
| ROI overprojection from adoption headlines | Budget lock-ins may happen before sufficient usage intensity and outcome quality are proven. | Gate scale decisions on intensity + outcome metrics, not adoption alone. | R8,R9,R10 |
| GenAI-specific security failure in coaching workflows | Model misuse, malicious data, or hallucinated recommendations can leak into live coaching and damage trust. | Map controls to NIST AI 600-1 + SP 800-218A and enforce pre-production threat review. | R13 |
| Over-claiming long-term ROI from short pilot windows | Annual budget lock-in can amplify reversal cost if gains do not persist. | Keep as pending and require 6-12 month cohort confirmation before full lock-in. | Pending: no reliable public benchmark |
These items remain open by design. Keep them out of annual lock-in and external ROI promises until internal evidence is complete.
| Pending topic | Impact | Minimum path |
|---|---|---|
| 12-month retention benchmark for AI sales coaching by role (SDR/AE/AM) | Without durable benchmark, leadership may treat short-term uplift as structural gain. | Run internal cohort tracking with holdout design before annualized ROI commitments. |
| Role-level transfer coefficient from support-agent productivity to quota outcomes | Without transfer coefficients, revenue planning can embed unsupported uplift assumptions. | Design SDR/AE/AM parallel pilots with common holdout windows and publish role-specific conversion deltas. |
| Public benchmark for minimum manager mediation cadence | Either over-automation or over-staffing risk if cadence is set by intuition. | Track mediation frequency against override drift and outcome stability per cohort. |
| Final legal outcome of the EU digital-omnibus AI timeline proposal | Negotiation outcome may shift launch windows and compliance preparation sequence in EU operations. | Keep this item in legal-watch status and refresh release gates each quarter until Parliament/Council adoption is final. |