Salesforce State of Sales, 2026-02-03
AI usage in sales and marketing is now mainstream
87%
Salesforce reports 87% AI usage in sales teams, based on 4,050 sales professionals surveyed between August and September 2025.

Use this page on AI use cases in sales and marketing to generate messaging and follow-up use cases, then validate fit, risk, and rollout readiness before committing budget.
Generate practical sales and marketing use cases, follow-up steps, and KPI checkpoints from one sales and marketing brief.
Pick a use-case scenario, generate immediately, then adapt the output to your pipeline.
Complete the three required fields, then generate to get sales and marketing use cases you can copy, export, and validate below.
Users can input context and generate actionable outputs before reading the deep report.
Each output includes positioning, sequencing, objections, and KPI checkpoints with clear next actions.
Key claims map to explicit sources, timestamps, and sample context so teams can verify quickly.
Comparison, boundary, and risk sections help teams choose a rollout path instead of collecting generic tips.
Add product value, audience, platform, tone, and goal so the generator has decision-grade signals.
Review positioning, copy variants, follow-up flow, objections, and KPI checklist before sharing.
Use the mid-page benchmark cards to classify your use case as fit, conditional, or not-fit.
Use the risk matrix to set human review gates, compliance checks, and data handling boundaries.
Generate your execution pack first, then launch with benchmark alignment and explicit risk controls.
Generate and validate first, then read in this order: conclusions → boundaries → methodology → concept limits → comparison → trade-offs → risk → scenarios → evidence gaps → sources.
Use these signal cards to decide whether to pilot now, delay rollout, or tighten governance first.
Salesforce State of Sales, 2026-02-03
87%
Salesforce reports 87% AI usage in sales teams, based on 4,050 sales professionals surveyed between August and September 2025.
Salesforce State of Sales, 2026-02-03
54% / 90%
54% of sales orgs already use AI agents and nearly 90% plan to by 2027, which raises implementation pressure on review and control layers.
Federal Reserve FEDS Note, 2026-04-03
18% / 41% / 78%
A 2026 Federal Reserve note reports 18% firm-level AI use, 41% employee-level GenAI use for work, and 78% employee coverage inside AI-using firms.
Federal Reserve FEDS Note, 2026-04-03
+68% / >20%
The same note shows pre-revision business AI use grew 68% year over year by end-2025, and over 20% of businesses expect to use AI in the first half of 2026.
U.S. Census CES-WP-26-25, 2026-04
18% / 22%
The U.S. Census 2026 AI supplement (reference period: Nov 2025-Jan 2026) reports 18% firm-level functional AI use, with expected firm-level adoption of 22% within six months.
U.S. Census CES-WP-26-25, 2026-04
52% / 57%
Among U.S. firms that adopted AI, 52% use it in sales and marketing, while 57% of adopters deploy AI in three or fewer business functions.
U.S. Census CES-WP-26-25, 2026-04
32% / 23% / 41%
The same Census evidence shows 32% employment-weighted firm adoption, but only 23% of firms (41% employment-weighted) report worker task-level AI use.
Eurostat AI Statistics, 2025-12-11
19.95% vs 55.03%
Eurostat 2025 data shows 19.95% AI adoption across EU enterprises overall versus 55.03% among large enterprises.
OECD SME AI Adoption Report, 2025-12-09
11.9% vs 40.0% / 29%
OECD 2025 reports 11.9% AI adoption for SMEs versus 40.0% for large firms; among GenAI-using SMEs, only 29% use it in at least one core business activity.
Eurostat AI Statistics, 2025-12-11
34.70% / 70.89%
Among EU enterprises already using AI, 34.70% apply it in marketing/sales. Top blocker is lack of expertise (70.89%), followed by legal uncertainty and data privacy concerns.
NBER Working Paper 33795, 2025-03
80% / >2 hours
NBER 2025 evidence across 66 firms and 7,137 workers found 80% of active users saved more than two hours per week on email, with no statistically significant task-composition change.
NBER Working Paper 33777, rev. 2026-01
>25,000 / no >2% effect
A revised NBER 2026 study covering over 25,000 workers in Denmark found no statistically significant wage or hours effects larger than 2% two years after LLM rollout.
EU AI Act Service Desk + EC FAQ, updated 2026-05-07
€15M / €35M (3% / 7%)
EU AI Act transparency duties (including Article 50) apply from August 2, 2026; Article 50 breaches can be fined up to €15 million or 3% of global annual turnover, while prohibited-practice violations can reach €35 million or 7%.
FTC Press Releases, 2024-09 & 2025-02
$193,000
The FTC announced a deceptive-AI-claims crackdown in September 2024 and finalized a DoNotPay order in February 2025 with $193,000 monetary relief and strict claim limits.
Boundary checks prevent overconfident rollout. If your context matches multiple non-fit signals, clean up process and governance before scaling.
Stable lead flow with at least three segmentation dimensions
You can segment leads by ICP, channel, and stage, then run controlled comparisons with enough sample stability.
Structured CRM process with constrained fields
You already have stage transitions and field governance to map generated outputs into trackable execution.
Ability to run 2-4 week experiments with review
You can compare baseline and AI-assisted workflows on response, meeting-booked, human-edit, and compliance-rejection rates.
Human review and evidence logging are accepted
Managers can review sensitive claims, discounts, and compliance language, and keep audit evidence for decisions.
Critical data gaps and inconsistent definitions
Missing historical message-performance data or inconsistent stage definitions weaken output quality and attribution confidence.
No channel policy standards
If channel limits, prohibited terms, and claim boundaries are undocumented, error rates and rework costs spike.
No review loop or accountable owner
Without ownership and weekly review cadence, pilots drift into anecdotal decisions and “speed-only” optimization.
Regulated sales without approval workflow
In finance, health, or legal contexts, missing approvals can create material compliance exposure.
Tool layer solves task completion. Report layer validates trust, boundaries, and rollout readiness.
Normalize product value, audience, platform, tone, and goal into consistent decision fields.
Generate deterministic structured outputs first, then optionally add AI-enhanced insights.
Validate outputs against benchmark metrics, source quality, and fit boundaries.
Recommend pilot scope, risk controls, and explicit next actions for execution.
These defaults define the minimum viable rollout path. Replace them with your team-specific constraints when needed.
| Assumption | Default | Boundary | Why It Matters |
|---|---|---|---|
| Pilot duration | 2-4 weeks | <2 weeks = noisy; >6 weeks = confounded by external shifts | Duration strongly affects signal quality and attribution confidence. |
| Primary KPI set | Response rate / Meeting-booked rate / Human edit rate / Compliance rejection rate | Use at least three metrics to avoid one-dimensional optimization | Single-metric wins often hide quality or compliance regressions. |
| Human review scope | Pricing, claims, compliance language, sensitive industries | For regulated sectors, full review is mandatory | Most high-impact failures happen at unreviewed outbound steps. |
| Regulatory timeline baseline (EU-facing workflows) | Feb 2025 prohibited practices in force; Aug 2026 Article 50 transparency duties | If you message EU users, labeling, logs, and human oversight controls must be designed upfront; high-risk timing should be revalidated against official updates | Late compliance retrofits can trigger rollback, fines, or enforcement orders. |
| Metric denominator tagging | Report firm-level and employee-level adoption side by side | Do not compare 18% (firm-level) directly to 41%/78% (employee coverage) as if they were the same KPI | Denominator mismatch leads to wrong budget sizing and rollout maturity assumptions. |
| Functional expansion pace | Keep first rollout within <=3 business functions | If >3 functions launch in parallel, require dedicated attribution ownership and rollback thresholds | Census 2026 shows 57% of adopters still stay within three or fewer functions; premature breadth increases debugging and governance load. |
| AI-claim substantiation | Every external AI capability/outcome claim must map to evidence | No-evidence claims must not be auto-sent in sales or marketing assets | FTC enforcement now includes monetary relief and claim restrictions. |
| Model strategy | Template fallback + optional AI enhancement + human review | Output must remain complete when AI API is unavailable or confidence is low | Operational reliability is mandatory for daily sales work. |
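The denominator-tagging assumption above can be made mechanical. The sketch below (Python; all names are hypothetical, not part of any cited methodology) tags each adoption figure with its denominator so firm-level and employee-level rates cannot be silently compared:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AdoptionMetric:
    """An adoption figure tagged with its denominator and sample window."""
    name: str
    value: float      # fraction, e.g. 0.18 for 18%
    denominator: str  # "firm", "employee", or "employee-weighted"
    window: str       # sample window to show alongside the number

def comparable(a: AdoptionMetric, b: AdoptionMetric) -> bool:
    """Two adoption metrics are comparable only on the same denominator."""
    return a.denominator == b.denominator

firm = AdoptionMetric("firm-level AI use", 0.18, "firm", "2026-04")
worker = AdoptionMetric("employee GenAI use", 0.41, "employee", "2026-04")

# 18% (firm) and 41% (employee) are different KPIs, not a growth story.
assert not comparable(firm, worker)
```

A dashboard layer built on labels like these can refuse to plot mismatched series on one axis, which is the practical version of the "denominator labels in weekly dashboards" control.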
The term “AI in sales” spans very different accountability models. Define the layer first, then automate.
| Concept | Definition | Applies When | Not Fit When | Evidence |
|---|---|---|---|---|
| Assistive drafting layer | AI generates drafts, summaries, and objection prompts; humans approve before send. | You need speed gains with moderate risk and can keep human checks. | You need zero-human outbound in high-stakes claim-heavy contexts. | NBER 31161 (gains concentrated in assistive workflows and novice workers) |
| Measurement layer (firm vs employee denominator) | Firm-level adoption, employee-use rate, and employee-weighted coverage are different metrics. | Board updates and ROI reviews explicitly show denominator and sample window. | One favorable metric is used to claim blanket enterprise adoption. | Federal Reserve FEDS Note 2026 (18% / 41% / 78%) |
| Peripheral-task layer vs core-business layer | Using AI for drafting/summarization/search does not mean core revenue workflows are AI-ready. | Peripheral tasks are validated first, then expanded into pricing, negotiation, and renewal in controlled phases. | Teams equate “copy generation success” with end-to-end autonomous conversion readiness. | OECD 2025 (only 29% of GenAI-using SMEs apply it in at least one core activity) |
| Agent collaboration layer | AI can trigger multi-step tasks (retrieve, draft, follow-up) under guardrails. | You have approval gates, logs, rollback paths, and clear ownership. | No attribution trail exists and errors cannot be traced quickly. | Salesforce 2026 (54% current agent use in sales teams) |
| Efficiency layer vs financial-outcome layer | Hours saved and faster drafting do not automatically imply near-term wage, hours, or profit shifts. | Efficiency signals are treated as leading indicators, then validated against revenue and retention outcomes. | A 1-2 week efficiency uplift is converted directly into annual ROI assumptions. | NBER 33795 + NBER 33777 |
| Automated outbound layer | System sends messages autonomously while humans review by exception. | Channel policy is codified and knowledge sources are trustworthy. | Regulated or promise-heavy messaging requires deterministic verification. | FTC 2024 + EU AI Act transparency and claim obligations |
| High-risk decision layer | AI influences decisions tied to rights, eligibility, or sensitive outcomes. | Risk assessment, data quality controls, and human oversight are in place. | Opaque model outputs are used directly without explainability or review. | EU AI Act + NIST AI RMF governance requirements |
Choose a path based on operational maturity, not trend pressure, and account for governance cost.
| Option | Best For | Time To Value | Trade-Off | Recommendation |
|---|---|---|---|---|
| Generic prompt playground | Ad hoc ideation and message brainstorming | Fast (same day) | Low structure, weak governance, hard to audit | Use as a supplement, not as the primary outbound execution system. |
| CRM-native AI copilot | Teams with mature RevOps and established workflow ownership | Medium (2-8 weeks) | Higher implementation complexity and change-management effort | Best for scaled teams that need deep system integration. |
| Agent-first automation platform | High-volume outreach teams with enforceable governance controls | Medium-Slow (3-10 weeks) | Higher upside, but larger blast radius when control fails | Start in a low-volume sandbox and scale by risk tier. |
| This hybrid page (tool + report) | Teams that need immediate output plus decision confidence | Fast (pilot in one day) | Requires disciplined review and KPI tracking to stay reliable | Strong entry path before larger system investments. |
The real choice is not whether AI can generate content, but whether post-generation control cost stays acceptable.
| Decision | Upside | Downside | Guardrail |
|---|---|---|---|
| Launch same day (speed-first) | Fastest route to initial output and directional learning | Higher risk of unsupported claims and compliance misses | Limit automation to low-risk templates; require human approval for high-risk claims. |
| Prioritize CRM deep integration (consistency-first) | Higher traceability and cleaner long-term measurement | Higher setup cost and slower initial learning cycle | Use this page for pilot proof before committing full integration budget. |
| Scale agent-led outbound (scale-first) | Higher throughput and lower marginal execution cost | Lower personalization can erode trust if unchecked | Set frequency caps, quality sampling, and automatic rollback thresholds. |
| Expand many business functions at once (coverage-first) | Faster cross-team rollout and visible short-term “AI launched” progress | Attribution complexity and governance overhead can spike quickly | Roll out by layer (outbound → follow-up → renewal) and only expand after each layer passes KPI and compliance gates. |
| Optimize for time-saved only (metric-first) | Short-term weekly productivity gains are easier to demonstrate internally | Teams can end up “faster but not better” on meetings, revenue, and trust outcomes | Track meeting-booked rate, win rate, unsubscribe/complaint, and compliance rejection alongside hours saved. |
| Keep fully human execution (risk-first) | Maximum control over brand and regulatory exposure | Limited productivity gain and higher opportunity cost | Keep humans on high-risk steps, then automate low-risk steps incrementally. |
High-probability/high-impact risks should be controlled before scaling, or short-term gains will be offset by long-term rework and exposure.
| Risk | Probability | Impact | Trigger | Mitigation |
|---|---|---|---|---|
| Unsupported or exaggerated claims in outbound messaging | Medium-High | High | Generated content is sent without fact verification or evidence records | Maintain a claim-to-evidence registry and require manager approval for outcome/pricing claims. |
| Compliance mismatch by region/industry | Medium-High | High | No legal checkpoint for regulated communication or EU-facing transparency duties | Version legal templates, add review gates, and map controls to EU AI Act timelines. |
| Sensitive deal or personal data leakage | Medium | High | PII or confidential opportunity data is entered directly into generation pipelines | Apply data minimization, anonymization, role-based access, and export audit logs. |
| Channel-policy mismatch | Medium | Medium | Messages violate channel length/policy constraints | Add post-generation channel checks and auto-trimming rules. |
| Over-automation degrades buyer trust | Medium | Medium-High | No contextual personalization at critical touchpoints | Reserve high-stakes interactions for human customization. |
| External AI claims are not evidence-backed | Medium | High | Sales or marketing copy claims guaranteed AI outcomes without verifiable support | Use claim approval workflows, attach evidence links, and retain versioned legal review logs. |
| KPI denominator mismatch misleads leadership decisions | Medium | Medium-High | Firm-level adoption and employee-level use metrics are reported as one number | Require denominator labels, sample windows, and methodology-change notes in weekly dashboards. |
| AI literacy non-compliance after go-live | Medium | High | Teams use GenAI for ad copy or translation without documented literacy training, ownership, and supervision | Treat Article 4 literacy as a launch gate: training evidence, role accountability, and periodic audits. |
| Accidental use of EU-prohibited AI practices | Low-Medium | High | Workplace emotion-recognition or other prohibited patterns are embedded into automation under a “marketing efficiency” label | Run a prohibited-practice checklist before release; hard-block prohibited cases and require legal sign-off for edge cases. |
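The claim-to-evidence registry named in the mitigation column can be enforced as a pre-send gate. A minimal sketch in Python follows; the registry contents, regex, and function names are illustrative, not a vetted compliance filter:

```python
import re

# Hypothetical registry: approved claim phrase -> evidence reference.
EVIDENCE_REGISTRY = {
    "saves 2 hours per week": "NBER WP 33795",
}

# Flag outcome-style wording; extend the pattern for your own claim taxonomy.
RISKY = re.compile(r"\b(?:guaranteed|saves?|increases?)\b[^.;]*", re.IGNORECASE)

def presend_check(message: str, registry: dict[str, str]) -> list[str]:
    """Return claim phrases with no registry backing; empty list = OK to queue."""
    return [c.strip() for c in RISKY.findall(message)
            if not any(backed in c.lower() for backed in registry)]

flags = presend_check(
    "AI saves 2 hours per week on email. Guaranteed 3x ROI.",
    EVIDENCE_REGISTRY,
)
# "Guaranteed 3x ROI" is flagged; the registry-backed time-savings claim passes.
```

Flagged messages would route to the manager-approval queue rather than being auto-sent, matching the human-review guardrail above.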
These examples include both positive paths and one failure pattern to clarify real rollout conditions.
| Scenario | Assumption | Process | Result |
|---|---|---|---|
| SaaS outbound team improves meeting-booked rate | 1,200 monthly leads, 3 SDRs, low response baseline | Generate three outreach variants and objection flows, then run a two-week segmented A/B test. | Faster prep time and clearer follow-up ownership; quality lift measured against baseline cadence. |
| B2B renewal rescue workflow | Renewal risk increasing for strategic accounts | Build renewal-risk scripts and escalation paths with legal review checkpoints. | Sales and customer success teams share one execution script and reduce handoff friction. |
| Cross-channel nurture alignment | Email and LinkedIn messaging are inconsistent | Generate unified value proposition, then split channel-specific variants by format constraints. | More consistent brand narrative and less message duplication fatigue. |
| Counterexample: automation launched before data cleanup | CRM fields are inconsistent but team pushes for immediate full automation | Generated content is sent at scale first, while instrumentation and field cleanup are delayed. | Send volume increases, but meeting quality and conversion stability do not improve; team reverts to human-plus-template mode. |
The items below currently lack strong public evidence. This page does not force deterministic conclusions on them.
Most public claims are vendor case studies or surveys with inconsistent definitions; large cross-industry RCT evidence is limited.
Minimum action: Run a 2-4 week baseline-vs-AI test tracking at least response, meeting-booked, and human-edit rates.
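One way to read out such a baseline-vs-AI test is a standard two-proportion z-test on response rates. The sketch below uses only the Python standard library; the 60/1000 vs 85/1000 response counts are invented for illustration:

```python
from math import sqrt, erf

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int):
    """Two-sided z-test for a difference in two proportions (e.g. response rates)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF, Phi(x) = 0.5 * (1 + erf(x / sqrt(2))).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical pilot: 60/1000 baseline responses vs 85/1000 AI-assisted.
z, p = two_proportion_z(60, 1000, 85, 1000)
```

At these invented volumes the difference clears conventional significance; at a few dozen leads per arm it usually will not, which is why the 2-4 week window and stable segmentation matter.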
As of 2026-05, most available ROI numbers are vendor narratives rather than audit-grade financial benchmarks.
Minimum action: Build an internal payback model using deployment cost, labor savings, incremental revenue, and compliance overhead.
Short-term efficiency metrics are available, but cross-industry long-term trust and retention studies remain sparse.
Minimum action: Track unsubscribe, complaint, and NPS trends as gating metrics before expanding automated coverage.
Most available guidance is platform-specific or case-based; cross-industry, cross-channel threshold evidence is limited.
Minimum action: Set channel-specific red lines using your own historical distribution rather than a single universal cut-off.
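Deriving a channel red line from your own historical distribution can be as simple as a percentile cut. The sketch below is a minimal Python version with linear interpolation; the weekly complaint-rate history is hypothetical:

```python
def percentile(values: list[float], q: float) -> float:
    """Linear-interpolated percentile (q in [0, 100]) of a list of numbers."""
    xs = sorted(values)
    if not xs:
        raise ValueError("no data")
    k = (len(xs) - 1) * q / 100
    lo, hi = int(k), min(int(k) + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)

# Hypothetical weekly complaint rates per 1,000 sends from your own history.
history = [0.4, 0.6, 0.5, 0.9, 0.7, 1.2, 0.8, 0.6, 0.5, 1.0]

# Pause automated sends on a channel whenever its rate crosses this line.
red_line = percentile(history, 90)
```

The 90th percentile is itself a judgment call; the point is that the threshold comes from your own distribution per channel, not from a borrowed universal cut-off.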
Each key metric includes publication date, page update date, and intended use for transparent verification.
Salesforce - State of Sales 2026 (4,050 sales professionals)
https://www.salesforce.com/news/stories/state-of-sales-report-announcement-2026/
Published: 2026-02-03 | Updated: 2026-05-07
Use: Adoption rate, agent usage, and time-saving indicators
Used for 87% AI usage, 54% agent usage, 34%/36% expected time savings, and survey scope.
Federal Reserve - Monitoring AI Adoption in the U.S. Economy
https://www.federalreserve.gov/econres/notes/feds-notes/monitoring-ai-adoption-in-the-u-s-economy-20260403.html
Published: 2026-04-03 | Updated: 2026-05-07
Use: Firm-level adoption, employee-level usage, and methodology caveats
Used for 18% firm adoption, 41% worker use of GenAI, 78% employee coverage in AI-using firms, and revision-bound comparability notes.
U.S. Census Bureau CES Working Paper 26-25 - The Microstructure of AI Diffusion
https://www.census.gov/library/working-papers/2026/adrm/CES-WP-26-25.html
Published: 2026-04 | Updated: 2026-04-22
Use: Firm vs employment-weighted adoption, functional breadth, and sales/marketing functional share
Used for 18% firm adoption, 32% employment-weighted adoption, 22% six-month expectation, 52% sales/marketing functional use, 57% <=3-function deployment, and 23%/41% worker-task use.
Eurostat - AI in enterprises statistics (2025 edition, updated 2026-03)
https://ec.europa.eu/eurostat/statistics-explained/SEPDF/cache/106920.pdf
Published: 2025-12-11 | Updated: 2026-03
Use: Size-based adoption gap, sales/marketing functional use, and barriers
Used for 19.95% overall adoption, 55.03% large-enterprise adoption, 34.70% marketing/sales use-case share, and 70.89% expertise barrier.
OECD - AI adoption by small and medium-sized enterprises
https://www.oecd.org/content/dam/oecd/en/publications/reports/2025/12/ai-adoption-by-small-and-medium-sized-enterprises_9c48eae6/426399c1-en.pdf
Published: 2025-12-09 | Updated: 2026-05-07
Use: SME vs large-firm adoption gap and core-activity penetration boundary
Used for 11.9% SME adoption vs 40.0% large-firm adoption, and the finding that only 29% of GenAI-using SMEs apply it in at least one core business activity.
NBER Working Paper 33795 - Generative AI and the Nature of Work
https://www.nber.org/system/files/working_papers/w33795/w33795.pdf
Published: 2025-03 (revised 2025-10) | Updated: 2026-05-07
Use: Task-level efficiency and boundary effects
Used for 66 firms and 7,137 workers, 80% active-user savings above two hours per week on email, and no significant task-composition shift.
NBER Working Paper 33777 - Large Language Models, Small Labor Market Effects
https://www.nber.org/system/files/working_papers/w33777/w33777.pdf
Published: 2025-02 (revised 2026-01) | Updated: 2026-05-07
Use: Longer-run labor-market counter-evidence
Used for the finding that wage and hours effects above 2% were not statistically significant two years after LLM launch in Denmark.
NBER Working Paper 31161 - Generative AI at Work
https://www.nber.org/papers/w31161
Published: 2023-04 (revised 2023-11) | Updated: 2026-05-07
Use: Assistive-workflow productivity heterogeneity boundary
Used for the boundary that average productivity gains are about 14% with stronger effects for lower-experience/lower-skill workers.
EU AI Act Service Desk - Implementation timeline
https://ai-act-service-desk.ec.europa.eu/en/ai-act/eu-ai-act-implementation-timeline
Published: 2024-08-01 | Updated: 2026-03-07
Use: Phased applicability and enforcement ceilings
Used for Feb 2025 prohibited-practice start, Aug 2026 general applicability, and Article 50 penalty ceiling up to €15M or 3% turnover.
European Commission FAQ - AI literacy (Article 4)
https://digital-strategy.ec.europa.eu/en/faqs/ai-literacy-questions-answers
Published: FAQ page (living document) | Updated: 2026-05-07
Use: Organizational AI literacy obligations and marketing-copy applicability
Used for the boundary that teams using GenAI for advertisement writing/translation still need to comply with AI literacy obligations from 2025-02-02 onward.
European Commission FAQ - Navigating the AI Act
https://digital-strategy.ec.europa.eu/en/faqs/navigating-ai-act
Published: FAQ page (living document) | Updated: 2026-05-07
Use: Penalty tiers and prohibited-practice ceiling
Used for the risk ceiling that prohibited-practice violations can reach €35M or 7% of global annual turnover.
EU AI Act Service Desk - Article 50
https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-50
Published: Regulation (EU) 2024/1689 | Updated: 2026-03-07
Use: Transparency obligations for AI-generated and manipulated content
Used for machine-readable disclosure requirements and deepfake/public-information transparency duties.
FTC - Crackdown on deceptive AI claims and schemes
https://www.ftc.gov/news-events/news/press-releases/2024/09/ftc-announces-crackdown-deceptive-ai-claims-schemes
Published: 2024-09-25 | Updated: 2026-05-07
Use: Enforcement posture for AI marketing claims
Used to show deceptive AI claims are an active enforcement target, not a hypothetical risk.
FTC - Final order in DoNotPay “AI lawyer” deceptive-claim case
https://www.ftc.gov/news-events/news/press-releases/2025/02/ftc-finalizes-order-donotpay-prohibits-deceptive-ai-lawyer-claims-imposes-monetary-relief-requires
Published: 2025-02-03 | Updated: 2026-05-07
Use: Concrete enforcement outcome and monetary relief
Used for $193,000 monetary relief and restrictions on unsupported AI capability claims.
NIST AI 600-1 - AI Risk Management Framework: Generative AI Profile
https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=958388
Published: 2024-07-26 | Updated: 2026-05-07
Use: Operational governance controls for GenAI deployment
Used for controls such as legal alignment, adversarial testing, provenance tracking, and incident disclosure.
Extend from examples to full-funnel execution.
Turn one sales and marketing brief into positioning, outreach, follow-up, and KPI actions.
Generate prospecting sequences and response-handling playbooks.
Align team messaging standards and cadence checkpoints.
Coordinate demand generation and sales execution from one plan.
Design multi-step agent workflows for sales execution tasks.
Convert sales and marketing use cases into role-play and training assets.