
Tool-first workflow: quantify sales coaching software capabilities, generate readiness and action paths, then validate evidence, boundaries, and risk before scale. All baseline fields are required except Constraints.
Input guardrails before generation
Results include recommendation, KPI changes, uncertainty, boundaries, and next actions.
Review key numbers, recommendation rationale, and fit boundaries before deciding your rollout path.
Preview mode: summary cards below use the default baseline scenario. Run the tool above to switch to your generated numbers.
Key 01 — Readiness score: 69/100
Key 02 — +8.4 pct
Key 03 — $4,193,437
Key 04 — 73/100 (±18%)
| Conclusion | Boundary | Sources | Status |
|---|---|---|---|
| AI adoption is mainstream, but execution intensity is uneven and often shallow. | Do not treat experimentation as readiness; track weekly active usage, AI-assisted work-hour share, and cross-system integration. | S1,S2,S6 | Verified |
| Coaching and performance workflows combined with gen AI correlate with stronger market-share outcomes. | This is correlation, not guaranteed causality; require pilot control groups before budget expansion. | S4 | Partial |
| Training programs have a visible cost floor that must be modeled before AI ROI claims. | If spend baseline is missing, net-impact estimates should be treated as directional only. | S3 | Verified |
| Workforce-facing deployments require jurisdiction-level controls, not a single global policy. | EU timeline controls, NYC bias-audit/notice obligations, and ADA accommodation paths should be designed before scale. | S7,S8,S9,S13 | Verified |
| More precise AI recommendations do not automatically produce better coaching outcomes. | Field-test feedback granularity by rep seniority and keep manager mediation in the loop. | S5,S14 | Partial |
| 12-month retention uplift from AI-powered coaching programs remains unproven in public data. | Mark as pending confirmation and require 6-12 month cohort validation before annual lock-in. | S5,S14,S15 | Pending |
Transparent assumptions, source registry, and known/unknown list prevent overconfident planning.
| Gap | Why it matters | Stage1b update | Status |
|---|---|---|---|
| Source registry had stale links and weak freshness metadata | Broken or undated sources reduce auditability and make leadership sign-off harder. | Rebuilt the registry with accessible, dated references (S1-S15), including refreshed ATD URL and explicit survey scope. | Closed |
| Risk section under-covered US employment AI obligations | Performance tracking can become employment decision input, creating legal exposure if audit and accommodation paths are missing. | Added NYC LL144 and ADA obligations with concrete triggers, and tied them to boundary/risk tables. | Closed |
| Adoption breadth was conflated with true execution depth | High headline adoption can still hide low weekly usage intensity, causing ROI over-forecast. | Added NBER intensity data (weekly usage + work-hour share) and required active-usage checks before scale decisions. | Closed |
| Counterexamples on AI coaching recommendation quality were thin | Without counterexamples, teams may assume “more precise AI suggestions” always improves rep outcomes. | Added peer-reviewed evidence showing over-precise AI recommendations can hurt self-efficacy without manager mediation. | Closed |
| Long-term causal evidence on sales-training retention is limited | Budget lock-ins may assume persistent uplift without public RCT support. | Explicitly marked as pending confirmation and required 6-12 month cohort validation before annual lock-in. | Pending |
| Assumption | Default | Why | Update trigger |
|---|---|---|---|
| Ramp gain conversion coefficient | 0.36 | Avoids over-crediting short-term onboarding gains. | Replace with cohort data when available. |
| Manager capacity baseline | 8 hours/week | Coaching execution is the behavior-change bottleneck. | Recalibrate if manager-to-rep ratio shifts >20%. |
| Compliance penalty | 4-6 points | Reflects legal review latency and rollout constraints. | Lower only after legal SLA is proven stable. |
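To make these defaults concrete, here is a minimal sketch of how they might combine into an adjusted readiness score. Only the three constants come from the table above; the function names, the capacity-scaling scheme, and the sample inputs are hypothetical stand-ins, not the planner's actual model.

```python
# Illustrative sketch only: the planner's real scoring model is not published.
# The three constants come from the assumptions table; everything else
# (names, weighting, inputs) is a hypothetical stand-in.

RAMP_GAIN_COEFFICIENT = 0.36        # credit only 36% of short-term ramp gains
MANAGER_CAPACITY_BASELINE = 8.0     # coaching hours per manager per week
COMPLIANCE_PENALTY_RANGE = (4, 6)   # points for legal review latency

def ramp_gain_credit(observed_ramp_gain_weeks: float) -> float:
    """Discount short-term onboarding gains until cohort data replaces the default."""
    return observed_ramp_gain_weeks * RAMP_GAIN_COEFFICIENT

def readiness_score(raw_score: float,
                    manager_hours_per_week: float,
                    high_compliance: bool) -> float:
    """Adjust a raw 0-100 capability score using the table's defaults."""
    # Scale by coaching capacity relative to the 8 h/week baseline, capped at
    # 1.0 so surplus manager hours cannot inflate the score.
    capacity_factor = min(manager_hours_per_week / MANAGER_CAPACITY_BASELINE, 1.0)
    adjusted = raw_score * capacity_factor
    if high_compliance:
        # Midpoint of the 4-6 point penalty; lower it only after the legal
        # SLA is proven stable (the table's update trigger).
        adjusted -= sum(COMPLIANCE_PENALTY_RANGE) / 2
    return max(0.0, min(100.0, adjusted))

print(readiness_score(80.0, 6.0, True))  # 55.0 under these assumed weights
```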
| Concept | What it includes | What it is not | Minimum condition | Failure signal |
|---|---|---|---|---|
| AI coaching and performance tracking | Adjusts drills by role, region, and behavior signals. | One-size-fits-all script generation. | Needs clean CRM stages + coaching feedback loops. | Advice quality converges to generic templates after week 2. |
| AI automation | Speeds note taking, summaries, and follow-up drafts. | Does not by itself improve rep skill progression. | Track if saved time is reinvested in coaching. | Admin workload drops but win-rate and ramp stay flat. |
| AI coaching recommendation | Prioritizes next-best coaching actions with confidence tags. | Fully autonomous performance evaluation. | Needs manager calibration cadence and documented overrides. | Manager disagreement rises for three consecutive cycles. |
| AI performance scoring in employment context | Flags coaching-risk patterns and routes high-impact decisions to human review. | Sole basis for promotion, compensation, or disciplinary actions. | Requires bias audit cadence, accommodation path, and override logging. | No annual audit evidence or no documented appeal channel for impacted employees. |
| Autonomous coaching agent | Can orchestrate prompts and sequencing with minimal supervision. | Not suitable as default in high-compliance environments. | Requires explicit legal gates, audit logs, and fallback controls. | Unable to provide traceable rationale for high-impact feedback. |
| ID | Source | Key data | Published | Checked |
|---|---|---|---|---|
| S1 | Salesforce: State of Sales 2026 landing page | Salesforce State of Sales 2026 page states that nine in ten sales teams use agents or expect to within two years, and that 94% of sales leaders see agents as essential to growth. | 2026-01 | 2026-03-05 |
| S2 | Salesforce State of Sales Report 2026 (PDF) | The report PDF (updated 2026-01-27) highlights agent and AI execution constraints, including that 51% of sales leaders report tech silos hinder AI impact. | 2026-01-27 | 2026-03-05 |
| S3 | ATD 2023 State of Sales Training | Median annual sales training spend was USD 1,000-1,499 per seller; sales kickoff adds another USD 1,000-1,499. | 2023-07-05 | 2026-03-05 |
| S4 | McKinsey: State of AI in B2B Sales and Marketing | Nearly 4,000 decision makers surveyed: companies combining advanced commercial personalization with gen AI are 1.7x more likely to increase market share. | 2024-09-12 | 2026-03-05 |
| S5 | NBER Working Paper 31161 | Study of 5,179 support agents: generative AI increased productivity by 14% on average, with 34% gains for novice and low-skilled workers. | 2023-04 (rev. 2023-11) | 2026-03-05 |
| S6 | NBER Working Paper 32966 | Nationally representative 2024-2025 surveys show rapid adoption (39.4% of adults used gen AI), but work-hour intensity remains concentrated at roughly 1-5%. | 2024-08 (rev. 2025-08-26) | 2026-03-05 |
| S7 | European Commission: EU AI Act | AI Act entered into force on 2024-08-01; prohibited practices applied from 2025-02-02, GPAI obligations from 2025-08-02, and high-risk obligations from 2026-08-02. | 2024-08-01 (timeline checked 2026-02-18) | 2026-03-05 |
| S8 | NYC DCWP: Automated Employment Decision Tools | Employers must complete an independent bias audit within one year before using an AEDT and provide candidate/employee notice at least 10 business days in advance. | 2023-07-05 | 2026-03-05 |
| S9 | ADA.gov: AI guidance for disability rights | Employers remain responsible for ADA compliance when using AI tools and must provide reasonable accommodation plus alternatives where AI may screen out people with disabilities. | 2024-05-16 | 2026-03-05 |
| S10 | NIST AI RMF Playbook | Playbook keeps govern-map-measure-manage implementation patterns and notes AI RMF 1.0 is being revised; update plans should avoid hard-coding stale controls. | 2023-01 (revision note checked 2025-11-20) | 2026-03-05 |
| S11 | NIST AI 600-1 (Generative AI Profile) | Published in July 2024 to extend AI RMF with GenAI-specific guidance across content provenance, misuse monitoring, and model risk controls. | 2024-07 | 2026-03-05 |
| S12 | ISO/IEC 42001:2023 AI management systems | First certifiable international AI management system standard, published in December 2023. | 2023-12 | 2026-03-05 |
| S13 | EUR-Lex: GDPR Article 22 | Individuals have the right not to be subject to decisions based solely on automated processing with legal or similarly significant effects. | 2016-04-27 | 2026-03-05 |
| S14 | Journal of Business Research (2025): AI precision in coaching | Two studies (N=244, N=310) found that highly precise AI recommendations can lower salespeople self-efficacy and degrade coaching outcomes without manager mediation. | 2025-05 | 2026-03-05 |
| S15 | NBER Working Paper 34174 | An estimated 25%-40% of workers in the US and Europe are in jobs where retraining for AI-supported software development tasks can improve productivity. | 2025-09 | 2026-03-05 |
| Topic | Status | Impact | Minimum action |
|---|---|---|---|
| 12-month retention uplift from AI-powered coaching programs | Pending | No reliable public RCT was found for this exact scenario; annual ROI can be overstated. | Mark as pending confirmation and run 6-12 month cohort validation before annual budget lock-in. |
| Cross-jurisdiction employment AI obligations | Partial | EU, NYC, and disability-rights obligations differ by trigger and timeline, which can delay global rollout if treated as one policy. | Maintain jurisdiction-level control matrices (a sketch follows this table) and refresh legal checkpoints quarterly. |
| Manager scoring consistency across cohorts | Known | Inconsistent scorecards reduce trust in AI recommendations. | Keep biweekly calibration and archive override logs for auditability. |
| Recommendation granularity by rep seniority | Partial | Overly precise AI recommendations can reduce self-efficacy for certain seller cohorts and weaken outcomes. | A/B test feedback granularity and require manager-mediated coaching for low-confidence cohorts. |
| Usage intensity to KPI elasticity | Partial | Fast adoption headlines may still map to small AI-assisted work-hour share, creating inflated short-term ROI expectations. | Set scale gates on weekly active usage and AI-assisted hours before extrapolating quota lift. |
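Because the obligations above differ by trigger and timeline, a jurisdiction-level control matrix can be expressed as plain configuration. The sketch below encodes only obligations already cited in the source registry (S7, S8, S9, S13); the structure, field names, and fail-closed default are illustrative assumptions, not legal guidance.

```python
# Hypothetical jurisdiction control matrix. Values encode only obligations
# cited in the source registry (S7, S8, S9, S13); field names and shape are
# illustrative, and this is not a complete control set or legal advice.
from datetime import date

JURISDICTION_CONTROLS = {
    "EU": {
        "ai_act_milestones": {                        # EU AI Act timeline (S7)
            "prohibited_practices": date(2025, 2, 2),
            "gpai_obligations": date(2025, 8, 2),
            "high_risk_obligations": date(2026, 8, 2),
        },
        "solely_automated_decisions_allowed": False,  # GDPR Art. 22 (S13)
    },
    "NYC": {
        "bias_audit_max_age_days": 365,  # LL144: independent audit within one year (S8)
        "notice_business_days": 10,      # LL144: advance notice to candidates/employees (S8)
    },
    "US_FEDERAL": {
        "ada_accommodation_path_required": True,  # ADA guidance (S9)
    },
}

def controls_for(jurisdiction: str) -> dict:
    """Fail closed: an unmapped jurisdiction gets no automated deployment."""
    return JURISDICTION_CONTROLS.get(jurisdiction, {"deploy_automated": False})
```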
Use structured comparisons and risk controls to make practical rollout choices.
| Dimension | Manual training | AI generic | Hybrid planner | Autonomous agent |
|---|---|---|---|---|
| Time-to-value | Slow (8-16 weeks) | Medium (4-8 weeks) | Medium-fast (3-6 weeks) | Fast setup, volatile outcomes |
| Data prerequisites | Low; relies on human notes | CRM baseline + prompt templates | CRM + conversation + manager feedback loops | Full signal stack + strict data governance |
| Governance load | Low | Medium | Medium-high with explicit controls | High |
| Evidence strength | Operational history, low transferability | Vendor evidence, mixed rigor | Cross-source + pilot validation required | Limited public evidence in sales-training context |
| Typical failure mode | Manager capacity bottleneck | Template drift and low adoption | Calibration not maintained after pilot | Compliance and explainability breakdown |
| Best-fit condition | Small teams with senior coaches | Need fast enablement with low setup cost | Need measurable uplift with controlled risk | Only with mature governance and legal approvals |
| Risk | Trigger | Business impact | Tradeoff | Minimum mitigation | Source + date |
|---|---|---|---|---|---|
| EU compliance deadline missed | EU-facing rollout without controls for the 2025-02-02, 2025-08-02, and 2026-08-02 milestones. | Launch delay, legal exposure, and forced feature rollback. | Faster launch vs regulatory certainty. | Map controls to EU AI Act timeline and keep jurisdiction-level legal sign-off gates. | S7 (timeline checked 2026-02-18) |
| Employment-decision challenge from workers | Promotion, compensation, or disciplinary outcomes are tied to AI scores without audit, notice, or accommodation channels. | Program trust drops, complaints rise, and regional deployment can be blocked by regulators or works councils. | Automation efficiency vs legal defensibility. | Require annual bias audits, 10-business-day notice, accommodation workflow, and documented human appeal paths. | S8,S9,S13 |
| Data quality debt masks true coaching impact | Revenue systems are disconnected and frontline data cleaning is delayed. | Confidence score inflates while real behavior change stalls. | Speed of rollout vs reliability of metrics. | Gate scale decisions on data hygiene KPIs and calibration pass rates. | S1,S10 (rev. note 2025-11-20) |
| Manager adoption fatigue | Calibration sessions or manager-mediated coaching loops are skipped for multiple cycles. | AI suggestions drift from frontline reality and over-precise feedback can reduce seller confidence. | Lower management overhead vs sustained coaching quality. | Protect manager coaching capacity and tie calibration completion to operating reviews. | S1,S3,S14 |
| Adoption-intensity mismatch | Leadership extrapolates annual quota uplift before weekly active usage and AI-assisted hours clear minimum thresholds. | Forecast bias, budget misallocation, and rollout fatigue after early optimism. | Fast narrative wins vs measurable execution depth. | Set hard gates on weekly active usage and AI-assisted work-hour share before scaling ROI assumptions (see the sketch after this table). | S6 |
| Over-claiming long-term ROI without public causal evidence | Annual budget is locked based on short pilot uplifts only. | Forecast bias and painful rollback if uplift decays after quarter two. | Aggressive scaling narrative vs defensible financial planning. | Label as pending and require 6-12 month cohort evidence before full lock-in. | S5,S14,S15 |
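The adoption-intensity gate referenced above reduces to a two-metric check. In this sketch both threshold values are placeholders for the team to calibrate; only the choice of metrics (weekly active usage and AI-assisted work-hour share) comes from the table and the NBER intensity data (S6).

```python
# Minimal sketch of a scale gate on execution depth. The two thresholds are
# hypothetical placeholders; only the metric choice comes from the risk table
# and S6 (AI-assisted work time concentrated around 1-5% of hours).

MIN_WEEKLY_ACTIVE_SHARE = 0.60     # assumed: 60% of reps active each week
MIN_AI_ASSISTED_HOUR_SHARE = 0.05  # assumed: above the 1-5% survey range

def scale_gate_open(weekly_active_share: float, ai_hour_share: float) -> bool:
    """Block ROI extrapolation until both intensity metrics clear their gates."""
    return (weekly_active_share >= MIN_WEEKLY_ACTIVE_SHARE
            and ai_hour_share >= MIN_AI_ASSISTED_HOUR_SHARE)

print(scale_gate_open(0.72, 0.03))  # False: usage is broad but still shallow
```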
| Scenario | Assumptions | Process | Expected outcome | Counterexample / limit |
|---|---|---|---|---|
| Enterprise onboarding acceleration | 80 reps, weekly coaching, medium compliance. | Run six-week pilot across two cohorts (a readout sketch follows this table). | Ramp reduction 2.5-4.5 weeks with confidence ~75. | If manager calibration drops below 80% completion for two cycles, projected gains usually do not hold. |
| Regulated mid-market pilot | 32 reps, high compliance, partial taxonomy. | Restrict automated coaching recommendations to legal-approved script domains. | Pilot recommendation with controlled ROI and lower risk. | If region-specific consent controls are absent, rollout should pause even when pilot KPIs look positive. |
| Resource-constrained team | 20 reps, monthly coaching, CRM-only signals. | Run 30-day stabilization sprint before pilot. | Stabilize tier until readiness and confidence improve. | If data quality and taxonomy stay unchanged, automation may increase activity but not quota attainment. |
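For the enterprise onboarding scenario, a two-cohort pilot readout might be computed along these lines. The dataclass fields and sample numbers are invented for illustration; only the rule that projected gains are distrusted once calibration completion drops below 80% for two consecutive cycles comes from the table, and the cohort split reflects the tenure heterogeneity noted in S5.

```python
# Illustrative pilot readout for the enterprise onboarding scenario. All
# field names and sample values are hypothetical; the 80%-for-two-cycles
# calibration rule is the scenario table's stated failure condition.
from dataclasses import dataclass

@dataclass
class Cohort:
    name: str
    baseline_ramp_weeks: float
    pilot_ramp_weeks: float
    calibration_completion: list[float]  # one value per coaching cycle

def ramp_reduction(c: Cohort) -> float:
    return c.baseline_ramp_weeks - c.pilot_ramp_weeks

def gains_trustworthy(c: Cohort) -> bool:
    # Distrust projected gains if calibration completion fell below 80%
    # for two consecutive cycles.
    low = [x < 0.80 for x in c.calibration_completion]
    return not any(a and b for a, b in zip(low, low[1:]))

# Report cohorts separately: uplift is heterogeneous by tenure (S5).
for cohort in [
    Cohort("novice", 14.0, 10.5, [0.90, 0.85, 0.88]),
    Cohort("experienced", 12.0, 11.2, [0.92, 0.90, 0.91]),
]:
    print(cohort.name, round(ramp_reduction(cohort), 1), gains_trustworthy(cohort))
```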
Stage1c gate snapshot with explicit blocker/high thresholds and tracked medium/low backlog items.
blocker: 0 · high: 0 · medium: 0 · low: 1
Gate status: PASS (stage1c, blocker=0, high=0)
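The gate rule itself is simple enough to state as code. This sketch assumes a plain severity-count mapping and mirrors the PASS condition above: blocker and high counts must both be zero, while medium and low items stay on the tracked backlog.

```python
# Sketch of the stage gate rule shown in the snapshot; the function name and
# dictionary shape are assumptions, the pass condition is as stated above.

def gate_status(counts: dict[str, int]) -> str:
    """PASS only when no blocker or high severity items remain open."""
    if counts.get("blocker", 0) == 0 and counts.get("high", 0) == 0:
        return "PASS"
    return "FAIL"

snapshot = {"blocker": 0, "high": 0, "medium": 0, "low": 1}
print(gate_status(snapshot))  # PASS; the single low item stays on the backlog
```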
Audit snapshot refreshed on 2026-03-05. Pending evidence is explicitly labeled and gated from scale decisions.
Grouped FAQ supports decision intent, then hands off to actionable next paths.
Design structured coaching loops and role-based enablement plans.
Build role-play drills and skill scorecards for frontline reps.
Evaluate rep capability and prioritize coaching actions.
Use tool outputs for immediate execution and keep report evidence in decision memos for auditability.
This section closes the capability-specific gap: what to evaluate, who should use it now, who should not, and how to reduce rollout risk with auditable controls.
Audit snapshot updated on 2026-03-05. Only evidence-backed conclusions are moved into rollout gates.
| Gap found | Decision impact | Enhancement made | Evidence | Status |
|---|---|---|---|---|
| Capability claims lacked hard, dated evidence anchors. | Makes procurement review vulnerable to narrative-only bias. | Rebuilt evidence registry with dated primary sources and bound key conclusions to source IDs. | C1/C3/C4/C5/C11 | Closed |
| Compliance boundary was generic and not trigger-based. | Teams may unknowingly move from coaching assist into regulated employment decisions. | Added AI Act timeline, high-risk employment scope, NYC LL144 triggers, and FTC accountability guardrails. | C5/C6/C7/C8 | Closed |
| ROI narrative did not reflect cohort heterogeneity. | Overstates short-term payoff and can lead to premature scale decisions. | Added novice-vs-experienced uplift split and work-hour penetration boundary to force pilot gating. | C3/C4 | Closed |
| Cross-vendor benchmark assumptions were implicit. | Review-site ranking can be mistaken as controlled product evidence. | Explicitly marked benchmark limitations and introduced pending-evidence items requiring vendor-side validation. | C12 | Partial |
| Capability | Why it matters | Minimum evidence | Status | Next action |
|---|---|---|---|---|
| Conversation intelligence quality | Directly impacts feedback relevance, coach trust, and downstream adoption. | Show score consistency by role, region, and language, plus blinded manager agreement checks (quarterly). | Strong | Keep monthly calibration and log every override reason before scale. |
| Roleplay and simulation depth | Determines whether reps can transfer practice to live deals. | Map scenario library to sales stages and objection taxonomy, then prove transfer in live-call QA sampling. | Medium | Add a stage-to-scenario coverage gate before renewal. |
| Manager workflow integration | Without manager-in-loop workflow, AI recommendations become shelfware. | Track manager acceptance rate and coaching completion SLA by team and tenure cohort. | Medium | Track manager acceptance weekly and tie to coaching OKRs. |
| Governance and explainability controls | Protects against unsafe automation in high-impact people decisions. | Document policy, appeal path, and traceable logs for every high-impact recommendation (C5/C6/C7/C8). | Gap | Set red-line policy: no compensation or promotion automation without legal sign-off. |
| Data and system interoperability | Capability value collapses if CRM, call, and LMS data remain siloed. | Bidirectional sync for account, rep, and coaching-event entities, with incident SLA tracking (C1). | Gap | Pilot with one governed integration path before broad rollout. |
| Decision question | Gate condition | If not met | Source ID |
|---|---|---|---|
| Can we scale to all teams now? | Only if CRM + conversation + enablement systems are synced with owned incident SLA; 51% of leaders report siloed stacks limit AI initiatives. | Keep pilot scope and prioritize integration debt before budget expansion. | C1 |
| Can we treat activity lift as revenue lift? | No. Productivity impact varies by cohort; measured gains can be strong for less-experienced workers and weak for experienced cohorts. | Split dashboards by tenure cohort and require outcome metrics (win-rate, cycle time) before rollout. | C3 |
| Does broad AI usage imply deep workflow impact? | Not automatically: national survey evidence shows adoption is broad but AI-assisted work-hour share is still limited. | Avoid full-automation commitments; keep recommendation-first operating model. | C4 |
| Can coaching scores influence promotion or termination in EU context? | Treat as high-risk employment AI and satisfy timeline-based obligations before use. | Restrict to assistive coaching recommendations with mandatory human review. | C5/C6 |
| Can NYC teams deploy without additional process? | No. LL144 requires independent bias audit within one year and at least 10 business-day notice before AEDT use. | Do not activate decision-impact workflows for NYC employees. | C7 |
| Should software budget replace enablement investment? | Use as additive planning baseline: ATD reports median annual sales training spend of USD 1,000-1,499 per seller plus similar kickoff spend. | Do not model ROI assuming training costs collapse to near-zero. | C11 |
| Segment | Suitable | Not suitable | Minimum gate |
|---|---|---|---|
| Mid-market B2B teams with active enablement ops | Can operationalize coaching playbooks quickly and maintain manager follow-through with explicit owners. | Not suitable if data silos remain unresolved; Salesforce reports 51% leaders see siloed stacks limiting AI initiatives (C1). | Dedicated owner + weekly review + cross-system entity sync checklist. |
| Enterprise multi-region sales orgs | Benefit from standardized coaching taxonomy, common score definition, and legal-ready governance model. | Not suitable for immediate rollout when EU employment-impact scenarios are unresolved, because AI Act classifies these as high-risk (C6). | Region-specific compliance overlays before phase-2 launch; map AI Act timeline milestones (C5). |
| Resource-constrained teams | Use pilot-first path when baseline data quality is acceptable and decisions remain recommendation-only. | Not suitable if expecting immediate full-team ROI: NBER shows AI gains are heterogeneous and can be near zero for experienced workers (C3). | Run a 30-60 day pilot with novice vs experienced cohort split before automation commitments. |
| Option | Best for | Upside | Limitation | Counterexample / not fit | Evidence |
|---|---|---|---|---|---|
| CRM-native coaching module | Single-CRM teams with low integration headcount. | Faster launch and lower initial integration complexity. | May inherit existing silo constraints if conversation and enablement data stay fragmented. | Underperforms when coaching events cannot sync back to CRM entities with SLA. | C1 |
| Best-of-breed coaching suite | Teams needing deeper conversation analytics and simulation depth. | Typically stronger AI coaching depth and scenario authoring controls. | Requires stronger governance and integration ownership to avoid stack fragmentation. | Not suitable if the org cannot support cross-system instrumentation and legal review cadence. | C1/C5/C6 |
| Human-led coaching with AI recommendation assist | Regulated teams or early-stage programs with uncertain data quality. | Lower legal exposure and better context adaptation in ambiguous calls. | Scaling speed is constrained by manager bandwidth. | If manager follow-through remains low, AI insights still become shelfware. | C3/C8 |
| Review-site-led vendor shortlist only | Very early market scan only, not final procurement. | Fast way to identify a broad candidate set. | Methodology is review-weighted and not controlled product benchmarking. | Do not sign multi-year contracts without same-script pilot evidence. | C12 |
| Risk | Trigger | Mitigation | Fallback path |
|---|---|---|---|
| Employment decision compliance risk | Coaching score is reused for promotion, termination, or compensation decisions without legal gates (C5/C6/C7/C8). | Keep human-in-the-loop approval, bias-audit cadence, notice workflow, and documented appeal path. | Switch to recommendation-only mode until compliance controls pass legal and policy review. |
| False ROI confidence | Topline activity lift is interpreted as universal revenue lift; NBER shows gains vary by tenure and can be minimal for experienced workers (C3). | Separate activity metrics from deal outcomes and report novice-vs-expert cohorts independently. | Pause expansion and run one full controlled cycle before renewing spend assumptions. |
| Integration fragility | Key entities sync one-way or delayed; 51% of sales leaders report disconnected stacks limiting AI initiatives (C1). | Define bidirectional sync SLO, ownership, and incident runbook before launch. | Limit scope to one pipeline stage and manual escalation path. |
| Benchmark illusion risk | Vendor shortlist is selected from review-site rank only; review methods are user-review weighted and not controlled head-to-head tests (C12). | Require same-script pilot, same-cohort scoring checks, and documented win/loss deltas before procurement. | Delay contract term extension and run a 6-8 week validation sprint. |
The items below remain open because reliable public datasets are limited. Keep them out of final procurement scoring until validated.
| Topic | Current status | Why pending | Minimum executable fix |
|---|---|---|---|
| Cross-vendor benchmark for multilingual coaching accuracy | Pending: no reliable public benchmark | Public sources do not provide reproducible head-to-head benchmarks across vendors and languages. | Require vendor-provided confusion matrix by language + third-party audit sample. |
| Longitudinal causal proof from coaching score to quota attainment | Pending: limited public causal studies | Most public evidence focuses on productivity proxies, not 12-month quota outcomes. | Run controlled cohort with pre-registered KPI and full-cycle win/loss analysis. |
| Security incident base rates for transcript retention vendors | Pending: fragmented disclosures | Public disclosures are inconsistent and do not support normalized risk comparison. | Collect SOC2/ISO27001 evidence, incident disclosure SLA, and data-retention policy red-lines. |
| ID | Source | Fact used | Published | Checked |
|---|---|---|---|---|
| C1 | Salesforce State of Sales Report 2026 (PDF) | Surveyed 4,050 sales professionals in 22 countries (Aug-Sep 2025); 54% of teams already use AI agents and another 34% plan to within two years. | 2026-01-27 | 2026-03-05 |
| C2 | Salesforce State of Sales (Web) | Landing page highlights: 9 in 10 teams use or expect to use agents within two years, and 94% of sales leaders see agents as essential to growth. | 2026-01 | 2026-03-05 |
| C3 | NBER Digest on Working Paper 31161 | Generative AI assistant increased worker productivity by about 14% on average; gains were much larger for less-experienced workers and near zero for the most experienced. | 2023-06-26 | 2026-03-05 |
| C4 | NBER Working Paper 32966 | By Aug 2024, 39.4% of US adults ages 18-64 had used generative AI; estimated AI-assisted work time remained around 1%-5% of work hours. | 2024-08 (rev. 2025-08-26) | 2026-03-05 |
| C5 | European Commission: Regulatory framework for AI | AI Act entered into force on 2024-08-01; prohibited practices and AI literacy obligations started from 2025-02-02; GPAI obligations from 2025-08-02; high-risk obligations from 2026-08-02. | 2024-08-01 | 2026-03-05 |
| C6 | EU resource: Questions and Answers on the AI Act | Employment-related systems (recruitment, selection, performance evaluation, promotion/termination decisions) are identified as high-risk and require controls like risk management, data governance, transparency, and human oversight. | N/A (living FAQ page) | 2026-03-05 |
| C7 | NYC Local Law 144 (AEDT) text | Requires independent bias audit within one year before use, notice at least 10 business days in advance, and civil penalties (USD 500 first violation; USD 500-1,500 for subsequent violations). | 2021-12-10 (effective 2023-07-05) | 2026-03-05 |
| C8 | FTC / DOJ / CFPB / EEOC Joint Statement on AI | US regulators state that AI does not exempt companies from existing legal obligations and signal coordinated enforcement in areas such as civil rights and consumer protection. | 2023-04-25 | 2026-03-05 |
| C9 | NIST AI Risk Management Framework 1.0 | Published in January 2023 as a voluntary framework with Govern / Map / Measure / Manage functions for trustworthy AI risk management. | 2023-01-26 | 2026-03-05 |
| C10 | NIST AI 600-1 (Generative AI Profile) | Published on 2024-07-26 as a cross-sector profile extending AI RMF for generative AI specific risks. | 2024-07-26 | 2026-03-05 |
| C11 | ATD 2023 State of Sales Training | For surveyed organizations (n=71), median annual sales training investment was USD 1,000-1,499 per seller, with sales kickoff often adding another USD 1,000-1,499. | 2023-07-05 | 2026-03-05 |
| C12 | G2 Research Scoring Methodology | Category inclusion requires minimum recent review volume (for example, at least 10 reviews in 12 months), and rankings are review/data-model based rather than controlled vendor benchmark experiments. | N/A (rolling methodology page) | 2026-03-05 |
| C13 | ISO/IEC 42001:2023 AI management systems | Published on 2023-12-18 and described by ISO as the first international certifiable AI management system standard. | 2023-12-18 | 2026-03-05 |
Act first: input your coaching baseline and generate capability readiness, confidence, and next actions. Decide next: validate method, evidence quality, suitability boundaries, and competitor tradeoffs before procurement.
Finish input, output interpretation, and action recommendation in one flow without switching pages.
Each result includes capability score, confidence, suitability, and fallback path instead of a raw label.
Use key metrics, suitable/not-suitable guidance, and boundary notes to avoid over-generalized vendor choices.
Review method, source registry, comparison matrix, risk controls, scenarios, and grouped FAQ before committing spend.
Enter team scale, quota signals, coaching capacity, data readiness, and compliance constraints.
Get readiness tier, confidence range, capability priorities, risk flags, and a clear rollout recommendation.
Check key numbers, suitable/not-suitable segments, and uncertainty markers before creating a shortlist.
Review methodology, source links, comparison tables, scenario examples, and mitigation paths for go/no-go.
Run the tool layer for speed and use the report layer for decision confidence.
Start capability planner