| Predictive model minimum sample | >= 40 qualified + >= 40 disqualified leads in last 12 months | Insufficient class volume increases variance and weakens score stability. | Use rules-assisted scoring and keep manual checkpoint review until the sample grows. (R3) |
| Predictive model release gate | AUC/F1 must pass a vendor-internal threshold; public docs do not disclose one universal numeric cutoff | Prevents teams from using unverifiable numeric folklore as release criteria. | Define an internal threshold policy with holdout validation and document it in RevOps governance (see the holdout-gate sketch after this table). (R9) |
| Temporal split validation protocol | Use time-ordered splits where training windows precede test windows (for example, forward chaining / TimeSeriesSplit) | Random splits on ordered data can leak future information and overstate forecast quality. | Reject launch decisions from leakage-prone evaluation and rerun validation with chronological splits (see the chronological-split sketch below). (R20) |
| Backtest-window design | Backtest window >= forecast horizon and < half of the full dataset; use 1 to 5 backtests for reliability | Insufficient or malformed backtests can produce unstable metrics and misleading readiness signals. | Increase window quality and rerun evaluation before approving forecast automation scope (see the window-check sketch below). (R18) |
| Point + interval forecast requirement | Track the mean forecast plus quantiles (for example P10/P50/P90) instead of one point estimate only | Interval views expose downside and upside spread that single-point metrics often hide. | If interval metrics are unavailable, keep manual review for downside-sensitive routing decisions (see the pinball-loss sketch below). (R18) |
| Near-zero denominator boundary in metric interpretation | Treat windows with near-zero observed totals as boundary states because wQL/WAPE can be undefined | Undefined metric windows can be misread as low error and cause false approval of weak models. | Flag those windows explicitly and use numerator/error-sum diagnostics before operational decisions (see the WAPE-guard sketch below). (R18) |
| Signal design for operations score | Use fit + engagement + combined score properties | Single-signal scoring is brittle and can inflate false positives. | Split score logic into separate properties and require multi-signal agreement (see the multi-signal sketch below). (R4) |
| Predictive opportunity model retraining cadence | Re-evaluate model quality at least every 15 days once predictive opportunity scoring is enabled | An every-two-weeks cadence reduces silent drift risk when campaign mix or lead quality shifts rapidly. | If monitoring capacity is low, keep hybrid mode and use manual checkpoint review until the cadence is staffed. (R11) |
| Score freshness and cap policy | Use score caps plus decay windows (1 / 3 / 6 / 12 months) matched to your sales-cycle length | Without decay and caps, old engagement events can overstate current intent and create routing noise. | Start with a 3-month decay for short cycles and 6-12 months for enterprise cycles, then adjust by false-positive trend (see the decay sketch below). (R12) |
| Governance operating model | Run Map, Measure, and Manage activities under a formal Govern function | Without lifecycle governance, drift and policy violations accumulate silently. | Create a monthly risk review cadence aligned to the NIST AI RMF functions. (R5) |
| Drift monitoring implementation gate | Define baseline constraints and automated alerts for data/model quality drift before broad rollout | Without continuous monitoring, forecast performance can decay silently in production. | Keep rollout at pilot scope until baseline constraints and alert pathways are operational (see the drift-alert sketch below). (R23) |
| Greenfield service availability | Amazon Forecast onboarding is closed to new customers (effective July 29, 2024) | Architecture plans based on unavailable services can delay implementation and procurement cycles. | Choose an actively onboardable forecasting stack before finalizing implementation roadmap. (R19) |
| Enterprise governance baseline for procurement | When legal/security signoff requires auditable governance, align controls to ISO/IEC 42001 management-system practices | Procurement and compliance teams often need process evidence beyond model metrics alone. | Document policy, ownership, and control evidence in an AI management system register before scale. (R17) |
| Solely automated significant decisions (UK GDPR Article 22) | If legal or similarly significant effects exist, safeguards and human challenge paths are required | Purely automated disqualification can create legal and trust risk in regulated markets. | Route high-impact outcomes to manual review and provide escalation/appeal workflow. (R7) |
| EU rollout phase gate | 2 Feb 2025 (prohibited practices + literacy), 2 Aug 2025 (GPAI), 2 Aug 2026 (most obligations), 2 Aug 2027 (selected existing-system obligations) | Compliance obligations activate in phases and may differ by deployment scope. | Sequence deployment by jurisdiction and milestone instead of one global cutover. (R6) |
| EU AI literacy supervision checkpoint | Article 4 obligations apply from February 2, 2025; supervision/enforcement rules apply from August 3, 2026 | Teams can miss legal readiness if they assume literacy obligations start only with formal enforcement. | Run internal literacy controls now and complete evidence packs before August 2026 supervisory start. (R21) |
| Colorado high-risk AI compliance date | SB25B-004 extends SB24-205 requirement date to June 30, 2026 | Incorrect U.S. state timelines can mis-sequence legal review for consumer-impacting systems. | Update state-level compliance calendars and verify counsel signoff before production expansion. (R22) |
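
For the release-gate row, a minimal holdout-gate sketch: it assumes a binary qualified/disqualified label and illustrative internal cutoffs (`auc_min=0.75`, `f1_min=0.60`), which are policy placeholders rather than any vendor-published threshold.

```python
# Holdout release-gate sketch; thresholds are illustrative policy values,
# not a vendor-disclosed cutoff.
from sklearn.metrics import roc_auc_score, f1_score

def passes_release_gate(y_true, y_prob, auc_min=0.75, f1_min=0.60, cutoff=0.5):
    """Check holdout AUC/F1 against the documented internal policy."""
    auc = roc_auc_score(y_true, y_prob)
    f1 = f1_score(y_true, [int(p >= cutoff) for p in y_prob])
    return (auc >= auc_min and f1 >= f1_min), {"auc": auc, "f1": f1}

ok, metrics = passes_release_gate([0, 0, 1, 1, 1, 0], [0.2, 0.4, 0.8, 0.7, 0.6, 0.3])
print(ok, metrics)
```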
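The chronological-split sketch for the temporal-validation row uses scikit-learn's `TimeSeriesSplit`; the toy arrays stand in for features and targets already sorted by time.

```python
# Chronological validation: each training fold strictly precedes its test
# fold, so no future rows leak into training.
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.arange(24).reshape(-1, 1)   # toy time-ordered feature
y = np.arange(24)                  # toy target

for train_idx, test_idx in TimeSeriesSplit(n_splits=4).split(X):
    assert train_idx.max() < test_idx.min()  # training always precedes testing
    print(f"train ends at t={train_idx.max()}, test spans t={test_idx.min()}..{test_idx.max()}")
```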
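The window-check sketch for the backtest-design row encodes the table's three rules directly; the function and parameter names are illustrative.

```python
# Sanity checks for a backtest plan: window >= forecast horizon, window
# < half the dataset, and between 1 and 5 backtest windows.
def validate_backtest_plan(n_obs, horizon, window, n_backtests):
    problems = []
    if window < horizon:
        problems.append("backtest window shorter than forecast horizon")
    if window >= n_obs / 2:
        problems.append("backtest window is half the dataset or more")
    if not 1 <= n_backtests <= 5:
        problems.append("number of backtests outside the 1-5 range")
    return problems  # empty list means the plan passes

print(validate_backtest_plan(n_obs=104, horizon=13, window=13, n_backtests=3))  # []
```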
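The pinball-loss sketch for the point-plus-interval row shows one common way to score P10/P50/P90 forecasts alongside the mean; the actuals and quantile forecasts here are illustrative inputs.

```python
# Pinball (quantile) loss: penalizes under-forecasts more heavily as the
# quantile level q rises, exposing downside/upside spread a single point
# metric hides.
import numpy as np

def pinball_loss(actual, predicted, q):
    """Mean quantile loss at level q."""
    diff = actual - predicted
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

actuals = np.array([100.0, 120.0, 90.0])
q_forecasts = {0.1: np.array([80.0, 95.0, 70.0]),
               0.5: np.array([105.0, 115.0, 88.0]),
               0.9: np.array([130.0, 150.0, 115.0])}
for q, preds in q_forecasts.items():
    print(f"P{int(q * 100)} pinball loss: {pinball_loss(actuals, preds, q):.2f}")
```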
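The WAPE-guard sketch for the near-zero-denominator row makes the boundary state explicit: when the observed total is near zero the ratio is undefined, so the function reports the raw error sum instead of a misleading "low error". The epsilon is an assumed policy knob.

```python
# WAPE with an explicit boundary state for near-zero observed totals.
import numpy as np

def wape(actual, forecast, eps=1e-9):
    denom = np.sum(np.abs(actual))
    abs_err = np.sum(np.abs(actual - forecast))
    if denom < eps:
        # Boundary window: report the raw error sum, not a ratio.
        return {"status": "undefined_denominator", "abs_error_sum": abs_err}
    return {"status": "ok", "wape": abs_err / denom}

print(wape(np.array([0.0, 0.0]), np.array([3.0, 1.0])))  # flagged, not "0% error"
```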
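The multi-signal sketch for the operations-score row keeps fit and engagement as separate properties and routes only on agreement across all three signals; the thresholds and weights are illustrative assumptions, not recommended values.

```python
# Multi-signal lead scoring: fit and engagement are scored separately and
# routing requires agreement, not a single blended number.
def combined_score(fit, engagement, w_fit=0.6, w_eng=0.4):
    return w_fit * fit + w_eng * engagement

def route_lead(fit, engagement, fit_min=60, eng_min=50, combined_min=65):
    score = combined_score(fit, engagement)
    qualified = fit >= fit_min and engagement >= eng_min and score >= combined_min
    return {"fit": fit, "engagement": engagement, "combined": round(score, 1),
            "route": "sales" if qualified else "nurture"}

print(route_lead(fit=80, engagement=30))  # high fit alone is not enough -> nurture
```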
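The decay sketch for the freshness-and-cap row applies a linear decay window and a score cap; the 90-day window and 100-point cap are illustrative defaults that should be tuned to the 1/3/6/12-month policy above.

```python
# Engagement-score decay with a cap: events older than the window count for
# nothing, recent events decay linearly with age, and the total is capped.
from datetime import datetime, timedelta

def decayed_score(events, now, window_days=90, cap=100.0):
    """events: list of (timestamp, points). Linear decay to zero at the window edge."""
    total = 0.0
    for ts, points in events:
        age_days = (now - ts).days
        if 0 <= age_days <= window_days:
            total += points * (1 - age_days / window_days)
    return min(total, cap)

now = datetime(2025, 6, 1)
events = [(now - timedelta(days=10), 40), (now - timedelta(days=200), 40)]
print(decayed_score(events, now))  # only the recent event counts, scaled by age
```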
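Finally, the drift-alert sketch for the monitoring-gate row compares a live window against stored baseline constraints and raises alerts when a tolerance is exceeded; the mean-shift and missing-rate tolerances are illustrative policy values.

```python
# Baseline-constraint drift check: alert on mean shift and missing-rate
# violations before forecast quality decays silently.
import numpy as np

def drift_alerts(baseline, live, mean_tol=0.15, missing_tol=0.05):
    alerts = []
    base_mean = np.nanmean(baseline)
    live_mean = np.nanmean(live)
    if base_mean and abs(live_mean - base_mean) / abs(base_mean) > mean_tol:
        alerts.append(f"mean shifted to {live_mean:.2f} vs baseline {base_mean:.2f}")
    live_missing = np.mean(np.isnan(live))
    if live_missing > missing_tol:
        alerts.append(f"missing rate {live_missing:.1%} above tolerance")
    return alerts

baseline = np.array([10.0, 11.0, 9.5, 10.5])
live = np.array([13.5, np.nan, 14.0, 13.0])
print(drift_alerts(baseline, live))  # both constraints fire on this window
```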