Impact on Work

How agentic AI transforms work from tool mastery to outcome orchestration.

The Fundamental Shifts We're Observing

Work is transforming as agency shifts from scarcity to abundance.

Agency Becomes Abundant

Agency is no longer unique to humans. AI agents make decisions, create multi-step plans, and execute autonomously.


Transaction Cost Collapse

Source: "The Coasean Singularity? Demand, Supply, and Market Design with AI Agents"

Shahidi, Rusak, Manning, Fradkin, Horton (MIT/Harvard/NBER, 2025)

"AI agents are poised to transform digital markets by dramatically reducing transaction costs—the expenses associated with using markets to coordinate economic activity. The activities that comprise transaction costs—learning prices, negotiating terms, writing contracts, and monitoring compliance—are precisely the types of tasks that AI agents can potentially perform at very low marginal cost."

Decision-Making Shifts in Time

Then: Encode all business rules at design time.


Now: Apply judgment based on current circumstances.

Example: Sales Outreach

Design-time rules (traditional):

IF lead_score > 80 THEN priority = "high"
IF industry = "tech" THEN use_template_B

Execution-time judgment (agentic):

Agent assesses real-time context:
• Lead viewed pricing page 3 times today
• Company just posted "Director of Operations" job
• LinkedIn shows 15 new hires this month

Agent discovers: ROI calculator tool available in system
Agent verifies: Tool uses only public data, safe to deploy
Agent decides: "Perfect moment to send ROI calculator, despite lead score being only 65. They're clearly scaling."
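
To make the contrast concrete, here is a minimal sketch in Python. The signal fields and the roi_calculator tool are hypothetical illustrations, not part of any specific product; the point is that the decision is computed from live context and checked against constraints at execution time.

# Minimal sketch of execution-time judgment (hypothetical signals and tool names).
# Instead of hard-coded IF/THEN rules, the agent weighs live context against
# declared constraints before choosing an action.

from dataclasses import dataclass

@dataclass
class LeadContext:
    lead_score: int
    pricing_page_views_today: int
    open_roles_posted: int
    new_hires_this_month: int

@dataclass
class Tool:
    name: str
    uses_only_public_data: bool

def choose_action(ctx: LeadContext, available_tools: list[Tool]) -> str:
    # Buying signals observed at execution time, not encoded at design time.
    scaling_signals = (
        ctx.pricing_page_views_today >= 3
        or ctx.open_roles_posted > 0
        or ctx.new_hires_this_month >= 10
    )
    # Constraint check: only deploy tools that are safe for outbound use.
    safe_tools = [t for t in available_tools if t.uses_only_public_data]
    roi_calc = next((t for t in safe_tools if t.name == "roi_calculator"), None)

    if scaling_signals and roi_calc:
        return "send_roi_calculator"   # judgment overrides the static lead-score rule
    if ctx.lead_score > 80:
        return "priority_outreach"     # the old design-time rule remains as a fallback
    return "nurture"

print(choose_action(
    LeadContext(lead_score=65, pricing_page_views_today=3,
                open_roles_posted=1, new_hires_this_month=15),
    [Tool("roi_calculator", uses_only_public_data=True)],
))  # -> send_roi_calculator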

Workflows Transform from Procedural to Declarative

Traditional: Define every step, outcome implicit.

Agentic: Define intent and constraints, let agents determine execution.

Procedural vs Declarative Workflows

What humans specify:

Before: Every step, branch, exception, failure mode

After: Intent, boundaries, verification criteria

What agents figure out:
• What tools to use
• When to hand over to another agent or human
• Coordination of steps
• Escalation timing
• How to recover from failures
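
A declarative specification could look like the following sketch (Python, with hypothetical field names): humans state the intent, boundaries, and verification criteria; the runtime checks proposed actions against the boundaries while the agent works out the rest.

# Sketch of a declarative task spec (hypothetical field names).
# Humans state the outcome, constraints, and verification criteria;
# the agent chooses tools, ordering, hand-offs, and recovery.

outreach_task = {
    "intent": "Book discovery calls with qualified leads in the logistics segment",
    "constraints": {
        "budget_usd": 500,
        "channels_allowed": ["email", "linkedin"],
        "data_sources": "public only",
        "escalate_if": ["lead requests legal terms", "negative sentiment detected"],
    },
    "verification": {
        "success_metric": "calls_booked >= 5 within 14 days",
        "spot_check": "human reviews 10% of sent messages",
    },
    # Deliberately absent: step-by-step procedure, tool list, branch logic.
}

def within_boundaries(action: dict, spec: dict) -> bool:
    """Guardrail check the runtime applies before the agent executes an action."""
    c = spec["constraints"]
    return (
        action.get("channel") in c["channels_allowed"]
        and action.get("cost_usd", 0) <= c["budget_usd"]
    )

print(within_boundaries({"channel": "email", "cost_usd": 20}, outreach_task))  # True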

The Bottleneck Flips: From Execution to Attention

The shift: Execution capacity scales with the number of agents you deploy. Human attention doesn't.

Traditional Economy Logic

Constraint: Human execution capacity
Optimization: Maximize outcome per person
Management Focus: Time management and coordination
Organizational Structure: Hierarchy (coordinate scarce execution)

Agent Economy Logic

Constraint: Human attention capacity
Optimization: Maximize what agents accomplish per unit of human attention
Management Focus: Attention management
Organizational Structure: Self-organizing human-AI agent networks that form around outcomes, and later dissolve

Coasean Theory

When transaction costs approach zero, organizational boundaries transform.

"Transaction costs play a central role in shaping organizations. Much of how we structure our economy and firms can be explained by transaction costs, often costs of human labor."
— Coase (1937, Nobel Prize 1991)
"Once agents can execute these functions effectively and cheaply, we will see significant shifts in the traditional make-or-buy boundaries that define firm organization and market structure."
— MIT 2025 Extension

Herbert Simon won the Nobel Prize (1978) for showing that human information-processing limits drive organizational hierarchy. When agents eliminate these limits, the rationale for hierarchy is challenged.

Skills That Matter in the Agentic Era

As AI learns to use tools better than humans, tool mastery is declining in value. These capabilities are rising in importance:

Intent Precision

Can you precisely define what success looks like and why it matters?

When execution was the bottleneck, rough direction was enough—humans figured out details during execution. When agents execute, vague intent leads to precise optimization of the wrong outcome.

"The better you are at delegating, or the better you learn to delegate, the better you will be able to manage a horde of agents."
— Matthias Patzak, Executive in Residence (CTO), AWS

Agency Placement

Can you decide which areas need low or high agency?

This IS the attention allocation decision: where should agents act autonomously vs. require human judgment? Poor placement means either bottlenecked workflows or runaway agents.
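
One way to operationalize agency placement is a simple rubric that maps risk, reversibility, and blast radius to an autonomy level. The sketch below is illustrative only; the thresholds and level names are assumptions, not an established standard.

# Sketch of an agency-placement rubric (illustrative thresholds and level names).

def agency_level(risk: str, reversible: bool, blast_radius: int) -> str:
    """risk: 'low' | 'medium' | 'high'; blast_radius: rough count of affected people or records."""
    if risk == "high" or not reversible:
        return "human_decides"        # agent drafts, a human approves every action
    if risk == "medium" or blast_radius > 100:
        return "agent_proposes"       # agent acts, but batches changes for review
    return "agent_autonomous"         # agent acts and reports outcomes only

print(agency_level("low", reversible=True, blast_radius=5))       # agent_autonomous
print(agency_level("medium", reversible=True, blast_radius=500))  # agent_proposes
print(agency_level("high", reversible=False, blast_radius=1))     # human_decides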

Trust Boundary Definition

Can you set boundaries that enable autonomy while ensuring safety?

Trust boundaries depend on context: risk level, reversibility, scale, scope, and special conditions (working with protected groups, compliance with regulations).


The goal: dynamic boundaries that expand and contract based on agent behavior. This requires real-time instrumentation to detect trust signals and automated boundary-adjustment mechanisms.

Trust Dynamics

Trust earners (expand boundaries):
  • Consistent accuracy over time
  • Appropriate escalations
  • Good judgment calls in unknown situations
  • Transparent reasoning
  • Learning from feedback
Trust destroyers (contract immediately):
  • Hallucinations or fabricated information
  • Failing to escalate when needed
  • Repeating mistakes
  • Attempting to exceed boundaries
  • Security incidents, biased decisions
Clear boundaries = agents act autonomously
Unclear boundaries = constant human intervention
Fixed boundaries = wasted potential or catastrophic risk
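
These dynamics can be wired into an automated boundary adjuster. The sketch below (Python, with hypothetical signal names and illustrative multipliers) expands a spending boundary slowly on earner signals and contracts it sharply on destroyer signals.

# Sketch of dynamic trust boundaries (hypothetical signal names, illustrative multipliers).
# Earner signals widen the boundary gradually; destroyer signals contract it immediately.

EARNERS = {"accurate_result", "appropriate_escalation", "transparent_reasoning"}
DESTROYERS = {"hallucination", "missed_escalation", "boundary_violation", "security_incident"}

class TrustBoundary:
    def __init__(self, spend_limit_usd: float = 100.0):
        self.spend_limit_usd = spend_limit_usd

    def record(self, signal: str) -> None:
        if signal in DESTROYERS:
            # Contract immediately and flag for human review.
            self.spend_limit_usd *= 0.25
            print(f"[alert] {signal}: boundary contracted to ${self.spend_limit_usd:.2f}")
        elif signal in EARNERS:
            # Expand slowly; trust is earned in small increments.
            self.spend_limit_usd = min(self.spend_limit_usd * 1.05, 10_000)

boundary = TrustBoundary()
for signal in ["accurate_result", "accurate_result", "hallucination"]:
    boundary.record(signal)
print(round(boundary.spend_limit_usd, 2))  # 27.56 -- one bad signal undoes many good ones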

Capability Networks Architecture

Can you design hybrid human-agent networks that self-orchestrate around outcomes?

Poor architecture: Coordination overhead consumes your attention, and agents get in the way.

Good architecture: Agents coordinate themselves, surface only exceptions, and integrate into the team seamlessly.

Alignment Verification

Can you recognize when agents drift from intended outcomes?

Requires sustained attention to monitor outcomes, not activity. You're verifying the WHAT (did we achieve the goal?) not the HOW (did they follow the process?).
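
A minimal sketch of an outcome-level check (hypothetical metric names and illustrative thresholds): the agent's activity volume never enters the verdict, only progress toward the stated goal does.

# Sketch of outcome-level alignment monitoring (hypothetical metric names, illustrative thresholds).

goal = {"metric": "qualified_meetings_booked", "target": 20, "period": "month"}

def alignment_check(goal: dict, observed: dict) -> str:
    achieved = observed.get(goal["metric"], 0)
    if achieved >= goal["target"]:
        return "aligned"
    if achieved >= 0.7 * goal["target"]:
        return "watch"          # trending low; sample the agent's recent decisions
    return "drift"              # interrupt: outcomes diverging from intent

# The agent was very busy (4,000 emails sent), but activity is not what gets verified:
print(alignment_check(goal, {"emails_sent": 4000, "qualified_meetings_booked": 9}))  # drift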

Judgment Quality

Can you make high-quality decisions under ambiguity where AI can't?

When agents escalate, your judgment must be worth the interruption. You become the exception handler for complex, ambiguous, high-stakes decisions.


The opportunity: Before, there wasn't always space for outliers. Now outliers can get better treatment because human attention has been freed up for exactly these cases.

5 Phases of Work Evolution in the AI Era

Understanding where we are and where we're heading. Different domains progress at different speeds.

Phase 1: 2015-2021

Tool Assistance

AI reduces friction with autocomplete and recommendations

Human Role: Operates tools
Bottleneck: Tool proficiency
Verification: Verify every character
Verification Scope: Every output

Phase 2: 2022-2023

Artifact Generation

AI produces complete artifacts from specifications

Human Role: Specifies artifacts
Bottleneck: Specification clarity
Verification: Review and edit
Verification Scope: Full artifacts

Phase 3: 2023-2025

Process Execution

AI executes multi-step workflows

Human Role: Designs workflows
Bottleneck: Process design knowledge
Verification: Review the diff
Verification Scope: Key changes

Phase 4: 2025-2026 ← We're Here

Outcome Orchestration

AI discovers and orchestrates capabilities to achieve outcomes

Human Role: Describes outcomes + constraints
Bottleneck: Outcome clarity + alignment
Verification: Alignment monitoring
Verification Scope: Spot checks

Phase 5: Future (Speculative)

Intent Alignment

AI infers and pursues outcomes from high-level intent

Human Role: Holds intent + values
Bottleneck: Value alignment
Verification: Values auditing
Verification Scope: Periodic audits

HR as Orchestration Lab

Lead by example: concrete use cases where HR can deploy agentic AI.

1. Autonomous Recruiting Research

Identify candidates who match outcome-capability profiles

How It Works

  • Agent analyzes GitHub, LinkedIn, blogs, conference talks
  • Evaluates "demonstrates orchestration thinking" not just "has skill X"
  • Generates evidence-based profiles with personalized outreach

What HR Learns

How to define outcomes clearly enough for agents to act. Where human judgment remains essential (culture fit, final assessment).

2. Onboarding Orchestration Agent

Every new hire achieves "productive contributor" status in their domain within X weeks

How It Works

  • Agent monitors new hire's progress across multiple systems (LMS, calendar, Slack, GitHub/work systems)
  • Identifies blockers: "You haven't gotten access to system X" or "You haven't met with teammate Y yet"
  • Autonomously resolves what it can (creates tickets, sends reminders, schedules intros)
  • Surfaces to humans only what needs judgment ("Hire seems stuck on concept Z, might need 1:1")
  • Adapts the path based on role, experience level, learning speed
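
A minimal sketch of the triage loop described above (hypothetical blocker types and action names): the agent resolves what it can and surfaces only judgment calls to a human.

# Sketch of the onboarding blocker triage loop (hypothetical blocker types and actions).

AUTO_RESOLVABLE = {
    "missing_access": "open_it_ticket",
    "intro_not_scheduled": "schedule_intro",
    "training_overdue": "send_reminder",
}

def triage(blockers: list[dict]) -> tuple[list[str], list[dict]]:
    actions, escalations = [], []
    for blocker in blockers:
        if blocker["type"] in AUTO_RESOLVABLE:
            actions.append(f"{AUTO_RESOLVABLE[blocker['type']]}({blocker['detail']})")
        else:
            escalations.append(blocker)    # e.g. "stuck on concept Z" needs a human 1:1
    return actions, escalations

actions, escalations = triage([
    {"type": "missing_access", "detail": "system X"},
    {"type": "conceptual_gap", "detail": "concept Z"},
])
print(actions)      # ['open_it_ticket(system X)']
print(escalations)  # [{'type': 'conceptual_gap', 'detail': 'concept Z'}]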

What HR Learns

Multi-agent coordination (learning agent + access agent + social agent + progress monitor). Human-in-loop design (when to escalate vs handle). Continuous improvement (agent gets better at onboarding over time).

Credibility Builder

Time to productivity dropped 40%, and new hires report better experience

3. Compensation Analysis

Ensure fair, market-aligned compensation

How It Works

  • Agent continuously monitors market data
  • Identifies compensation gaps before retention risks
  • Models scenarios with budget impact
  • Recommends proactive adjustments with data
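
A minimal sketch of the gap check (the market band, midpoint logic, and 10% threshold are illustrative assumptions, not real benchmark data):

# Sketch of compensation-gap detection against a market band (illustrative numbers).

MARKET_BANDS = {"data_engineer_l3": (95_000, 125_000)}   # stand-in for a market data feed

def comp_gap(role: str, salary: float) -> dict:
    low, high = MARKET_BANDS[role]
    midpoint = (low + high) / 2
    gap_pct = (midpoint - salary) / midpoint * 100
    return {
        "below_band": salary < low,
        "gap_to_midpoint_pct": round(gap_pct, 1),
        "action": "propose_adjustment" if gap_pct > 10 else "monitor",
    }

print(comp_gap("data_engineer_l3", 92_000))
# {'below_band': True, 'gap_to_midpoint_pct': 16.4, 'action': 'propose_adjustment'}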

What HR Learns

How to define fairness as an outcome. How to trust agent recommendations with spot-check verification.

4. Performance Review Synthesis Agent

Managers spend time on judgment, not data gathering

How It Works

  • Agent collects 360 feedback, project outcomes, goals progress, peer reviews
  • Synthesizes patterns: "Consistently praised for X, feedback suggests growth area in Y"
  • Pulls quantitative data: code shipped, deals closed, support tickets, etc.
  • Generates draft review highlighting: outcomes achieved, orchestration quality observed, areas for development
  • Manager reviews, adds judgment/context, discusses with employee

What HR Learns

Data synthesis at scale. How AI surfaces insights humans might miss. Where AI fails (nuance, context, sensitive situations). The shift from "manager writes review" to "manager validates and contextualizes AI synthesis".

Credibility Builder

Managers report 5 hours saved per review cycle, better data-driven conversations

5. Exit Interview Synthesis & Action Agent

Systematically address root causes of attrition, not just collect exit interview data

How It Works

  • Agent conducts exit interviews (some people are more honest with AI)
  • Synthesizes across exits: "Last 5 departures in engineering all mentioned unclear outcome expectations"
  • Cross-references with engagement data, performance reviews, manager feedback
  • Generates action recommendations: "Consider manager training on outcome clarity for team X"
  • Tracks whether actions reduce similar exits

What HR Learns

Pattern detection across qualitative data. Closing the loop (analysis → action → measurement). Sensitive data handling. When humans still need to do interviews (senior leaders, complex situations).

Credibility Builder

Attrition in team X dropped after AI identified root cause pattern

6. Candidate Experience Agent

Every candidate has excellent experience regardless of hiring decision

How It Works

  • Agent keeps candidates informed (status updates, next steps)
  • Answers questions about role, team, company
  • Schedules interviews across multiple calendars
  • Sends prep materials personalized to interview stage
  • Collects feedback and adapts ("Candidates prefer video intros to text descriptions")
  • Handles rejections professionally with specific feedback
  • Nurtures pipeline for future roles

What HR Learns

Multi-touchpoint orchestration. Personalization at scale. Feedback loops and continuous improvement. Balancing automation with human touch (where do humans add value?).

Credibility Builder

Candidate NPS increased 40 points, even for rejected candidates

7. Multi-Modal & Multi-Lingual HR Access Agent

Employees access HR services through any interface and language they prefer

How It Works

  • Agent exposes HR capabilities through multiple interfaces: traditional HR portal, employee's personal AI agent, voice interface, Slack/Teams
  • Adapts to employee's language preference automatically across all channels
  • Handles requests like "Update my address" or "How much PTO do I have?" regardless of channel or language
  • Personal AI agents can add context: "Book PTO for my daughter's graduation" (agent knows the date from calendar)
  • Voice interface enables hands-free access in employee's preferred language: "Submit my expense report from last week"
  • Maintains security and compliance across all channels and languages
  • Learns preferences: "This employee prefers Spanish, uses voice while commuting, Slack during work hours"
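
A minimal sketch of the interface-agnostic idea (hypothetical capability names; a keyword match stands in for real intent classification): every channel normalizes into the same request shape, and one capability layer serves them all.

# Sketch of channel- and language-agnostic request handling (hypothetical capability names).

from dataclasses import dataclass

@dataclass
class HRRequest:
    employee_id: str
    channel: str      # "portal" | "personal_agent" | "voice" | "slack"
    language: str     # e.g. "es", "en"
    utterance: str

CAPABILITIES = {
    "pto_balance": lambda emp: f"{emp}: 12 PTO days remaining",    # stub backend call
    "update_address": lambda emp: f"{emp}: address update started",
}

def route(req: HRRequest) -> str:
    # In practice an intent classifier maps utterance -> capability; keyword match as a stand-in.
    intent = "pto_balance" if "pto" in req.utterance.lower() else "update_address"
    result = CAPABILITIES[intent](req.employee_id)
    return f"[{req.channel}/{req.language}] {result}"

print(route(HRRequest("e-42", "voice", "es", "¿Cuánto PTO me queda?")))
# [voice/es] e-42: 12 PTO days remaining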

What HR Learns

Interface-agnostic and language-agnostic service design. How to expose capabilities for agent consumption (not just human UIs). Security models that work across channels. Accessibility through choice (not just compliance).

Credibility Builder

HR portal usage drops 60% while employee satisfaction with HR services increases 35%, with highest satisfaction gains among non-native English speakers

8. Internal Mobility & Talent Redeployment Agent

Match employees to opportunities in near real-time based on skills, aspirations, and project needs

How It Works

  • Agent continuously monitors project needs, skill gaps, and capacity across the organization
  • Analyzes employee profiles: skills, learning trajectory, career aspirations, past performance
  • Identifies opportunities in real-time: permanent roles and short-term project assignments
  • Proactively notifies employees: "Based on your Python skills and interest in ML, there's a 6-week project in the AI team"
  • Tracks outcomes to improve matching quality over time
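
A minimal sketch of the matching step (the scoring weights and profile fields are illustrative assumptions):

# Sketch of continuous skills-to-opportunity matching (illustrative weights and fields).

from dataclasses import dataclass

@dataclass
class Opportunity:
    title: str
    required_skills: set
    duration_weeks: int

@dataclass
class Employee:
    name: str
    skills: set
    aspirations: set   # topics the employee wants to grow into

def match_score(emp: Employee, opp: Opportunity) -> float:
    skill_fit = len(emp.skills & opp.required_skills) / max(len(opp.required_skills), 1)
    aspiration_fit = len(emp.aspirations & opp.required_skills) / max(len(opp.required_skills), 1)
    return 0.7 * skill_fit + 0.3 * aspiration_fit   # weights favor current skills over aspirations

emp = Employee("sam", skills={"python", "sql"}, aspirations={"ml"})
opp = Opportunity("6-week ML project in the AI team", required_skills={"python", "ml"}, duration_weeks=6)
print(round(match_score(emp, opp), 2))  # 0.5 -- notify the employee if above a threshold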

What HR Learns

Real-time talent marketplace design. Moving from annual mobility reviews to continuous matching. Skills inference from work patterns. Balancing employee aspirations with business needs.

Credibility Builder

Internal mobility increased 40%, time-to-fill for critical roles reduced by 50%

9. HR Policy & Compliance as a Service

HR policies and legal frameworks become accessible to AI agents across the organization, enabling compliance by design

How It Works

  • HR policies, principles, and legal requirements are exposed as consumable services for other teams' AI agents
  • Engineering team's deployment agent checks: "Can I schedule this release during these hours given on-call rotation policies?"
  • Sales team's commission agent verifies: "Is this commission structure compliant with compensation equity policies?"
  • Policies update centrally, all integrated agents reflect changes immediately
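
A minimal sketch of what "policies as a consumable service" could look like (hypothetical policy names and rules): another team's agent calls a check endpoint and gets a machine-readable verdict, and updating the central registry changes the answer for every caller at once.

# Sketch of policy-as-a-service (hypothetical policy names and rules).

POLICIES = {
    "deployment_hours": {"allowed_hours_utc": range(8, 18)},   # central registry; edit here,
    "on_call_rest": {"min_hours_between_shifts": 11},          # every integrated agent sees it
}

def check(policy: str, **facts) -> dict:
    """Return a machine-readable verdict another agent can act on."""
    if policy == "deployment_hours":
        ok = facts["proposed_hour_utc"] in POLICIES[policy]["allowed_hours_utc"]
        return {"allowed": ok, "reason": None if ok else "outside approved release window"}
    raise ValueError(f"unknown policy: {policy}")

# An engineering deployment agent asking before scheduling a release:
print(check("deployment_hours", proposed_hour_utc=22))
# {'allowed': False, 'reason': 'outside approved release window'}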

What HR Learns

Making compliance a service, not a gate. Designing for agent consumption alongside human consumption. Policies become contextual and embedded in workflows, not periodic training events.

Credibility Builder

Policy violations drop 70%, compliance questions to HR decrease 60%, other teams integrate HR rules without manual coordination

10. Expert Knowledge Attribution & Impact Tracking

Experts' encoded knowledge becomes scalable and reusable, with clear attribution and impact measurement for recognition and career advancement

How It Works

  • Experts encode their specialized knowledge, judgment patterns, and decision frameworks
  • When agents use this expertise, the system tracks attribution and impact
  • Sales expert's negotiation framework: "Used in 50 deals this quarter, contributing to $2M in closed revenue"
  • Engineering expert's architecture patterns: "Applied in 12 projects, reducing deployment issues by 40%"
  • Customer success expert's escalation playbook: "Resolved 200 cases, improved satisfaction scores by 15%"
  • Impact metrics feed into performance reviews and promotion decisions
  • Recognition shifts from "how much work did you do?" to "how much did your expertise enable others?"
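
A minimal sketch of the attribution ledger (hypothetical framework IDs and impact fields): each time an agent applies an expert's encoded framework, the usage and its measured impact are credited back to the author.

# Sketch of expert-knowledge attribution tracking (hypothetical IDs and impact fields).

from collections import defaultdict

usage_log = defaultdict(list)

def record_usage(framework_id: str, author: str, impact: dict) -> None:
    usage_log[(framework_id, author)].append(impact)

def impact_summary(framework_id: str, author: str) -> dict:
    events = usage_log[(framework_id, author)]
    return {
        "uses": len(events),
        "revenue_usd": sum(e.get("revenue_usd", 0) for e in events),
    }

record_usage("negotiation_v3", "ana", {"deal": "d-101", "revenue_usd": 40_000})
record_usage("negotiation_v3", "ana", {"deal": "d-102", "revenue_usd": 25_000})
print(impact_summary("negotiation_v3", "ana"))   # {'uses': 2, 'revenue_usd': 65000}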

What HR Learns

Measuring knowledge contribution at scale. Creating incentives for knowledge sharing vs hoarding. Redefining expertise value from individual execution to organizational impact. New promotion criteria based on knowledge leverage.

Credibility Builder

Knowledge sharing increased 60%, junior team members achieve senior-level outcomes 3x faster, experts promoted based on organizational impact not just individual output

Why AI Will Use Tools Better Than Humans

The promise is compelling, but the reality is more nuanced. Here's what we know today.

Human Tool Usage

  • Learning curve: days to months
  • Limited repertoire: 5-10 tools per domain
  • Tool lock-in: high switching costs
  • Cognitive load: limited working memory
  • Consistency varies by fatigue and context
  • Knowledge decay without regular use
  • Must actively seek best practices
  • Can only use one tool effectively at a time

AI Tool Usage: The Promise

  • Faster proficiency with complete documentation
  • Unlimited repertoire at runtime
  • Dynamic tool selection and adaptation
  • Runtime discovery of new tools
  • Automatic tool composition
  • Swaps tools when requirements change
  • Processes entire knowledge base
  • Never forgets procedures
  • Collective learning across agents

AI Tool Usage: The Reality

  • Not instant—takes multiple attempts
  • Too many available tools degrade quality
  • Tool quality and trust are challenges
  • Runtime discovery is also a vulnerability
  • Some workflows should remain static
  • Tool reputation and terms & conditions must be assessed
  • Without guidance, misses what's important
  • Context overload buries critical info
  • Bias amplification across agents

The Trajectory

As agents evolve, humans won't match AI's tool proficiency through training. Tool lock-in becomes a human constraint while agents adapt continuously. The traditional "get better at using tools" career path is becoming less relevant. The shift: from "hire people who know Tool X" to "hire people who can define outcomes, guard clarity of intent, and exercise judgment in situations where AI can't."

Critical Gaps & Uncertainties

What we don't know yet and what could go wrong.

Timeline Uncertainty

  • Early movers experimenting now
  • Regulatory frameworks could slow adoption

Evidence Gaps

  • Based on early patterns and extrapolation
  • Need more case studies of what works

Capability Plateau Risk

  • Current agents require significant oversight
  • Capabilities could plateau before full autonomy

Workforce Displacement

  • Not everyone can transition to intent definition
  • Could be more elitist than tool-based work

Organizational Inertia

  • Political dynamics deeply entrenched
  • Most transformation initiatives fail

Misalignment at Scale

  • "You get what you measure" amplified
  • Accountability gaps in autonomous decisions