How agentic AI transforms work from tool mastery to outcome orchestration.
Work is transforming as agency shifts from scarcity to abundance.
Agency is no longer unique to humans. AI agents make decisions, create multi-step plans, and execute autonomously.
Transaction costs collapse
Source: "The Coasean Singularity? Demand, Supply, and Market Design with AI Agents"
"AI agents are poised to transform digital markets by dramatically reducing transaction costs—the expenses associated with using markets to coordinate economic activity. The activities that comprise transaction costs—learning prices, negotiating terms, writing contracts, and monitoring compliance—are precisely the types of tasks that AI agents can potentially perform at very low marginal cost."
Then: Encode all business rules at design time.
Now: Apply judgment based on current circumstances.
Design-time rules (traditional):
IF lead_score > 80 THEN priority = "high"
IF industry = "tech" THEN use_template_B
Execution-time judgment (agentic):
Agent assesses real-time context:
• Lead viewed pricing page 3 times today
• Company just posted "Director of Operations" job
• LinkedIn shows 15 new hires this month
Agent discovers: ROI calculator tool available in system
Agent verifies: Tool uses only public data, safe to deploy
Agent decides: "Perfect moment to send ROI calculator, despite lead score being only 65. They're clearly scaling."
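The contrast above can be sketched in code. This is a hypothetical illustration, not a real system: the field names, thresholds, and signals (pricing-page views, hiring activity) are assumptions drawn from the example.

```python
# Hypothetical sketch: design-time rules vs. execution-time judgment.
# All field names and thresholds are illustrative assumptions.

def design_time_priority(lead: dict) -> str:
    """Static rules fixed when the workflow was built."""
    if lead["score"] > 80:
        return "high"
    return "normal"

def execution_time_priority(lead: dict, signals: dict) -> str:
    """Judgment applied to live context at run time."""
    buying_intent = (
        signals.get("pricing_page_views_today", 0) >= 3
        or signals.get("new_hires_this_month", 0) >= 10
        or "Director of Operations" in signals.get("open_roles", [])
    )
    # Live signals of a scaling company can outweigh a mediocre static score.
    if buying_intent:
        return "high"
    return design_time_priority(lead)

lead = {"score": 65}
signals = {"pricing_page_views_today": 3, "new_hires_this_month": 15}
print(execution_time_priority(lead, signals))  # high
```

The design-time version can only act on what its author anticipated; the execution-time version folds in whatever context is available at the moment of decision.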
Traditional: Define every step, outcome implicit.
Agentic: Define intent and constraints, let agents determine execution.
What humans specify:
Before: Every step, branch, exception, failure mode
After: Intent, boundaries, verification criteria
What agents figure out:
• What tools to use
• When to hand over to another agent or human
• Coordination of steps
• Escalation timing
• How to recover from failures
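One way to make "intent, boundaries, verification criteria" concrete is an outcome specification the agent receives instead of a step-by-step script. A minimal sketch, assuming hypothetical field names and example values:

```python
# Hypothetical sketch of an outcome specification: humans state intent,
# boundaries, and verification; the agent chooses tools, steps, and order.
from dataclasses import dataclass, field

@dataclass
class OutcomeSpec:
    intent: str                                    # what success looks like and why
    boundaries: list[str]                          # hard constraints the agent must respect
    verification: list[str]                        # how success will be checked
    escalate_when: list[str] = field(default_factory=list)  # when to involve a human

spec = OutcomeSpec(
    intent="Every qualified lead receives a relevant follow-up within 24 hours",
    boundaries=["use only public data", "no discounts above 10%"],
    verification=["follow-up sent", "lead engagement measured after 7 days"],
    escalate_when=["lead requests legal terms", "sentiment turns negative"],
)
```

Note what is absent: no tool list, no step ordering, no branch logic. Those are exactly the parts the agent figures out.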
The shift: Execution scales infinitely with agents. Human attention doesn't.
When transaction costs approach zero, organizational boundaries transform.
Herbert Simon won the 1978 Nobel Prize for showing that information-processing limits drive organizational hierarchy. When agents remove those limits, the rationale for hierarchy is challenged.
As AI learns to use tools better than humans do, tool mastery declines in value. These capabilities are rising in importance:
Can you precisely define what success looks like and why it matters?
When execution was the bottleneck, rough direction was enough—humans figured out details during execution. When agents execute, vague intent leads to precise optimization of the wrong outcome.
Can you decide which areas need low or high agency?
This IS the attention allocation decision: where should agents act autonomously vs. require human judgment? Poor placement means either bottlenecked workflows or runaway agents.
Can you set boundaries that enable autonomy while ensuring safety?
Trust boundaries depend on context: risk level, reversibility, scale, scope, and special conditions (working with protected groups, compliance with regulations).
Dynamic boundaries that expand and contract based on agent behavior.
Requires real-time instrumentation to detect trust signals and automated boundary adjustment mechanisms.
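A dynamic boundary of this kind can be sketched as a simple feedback loop: the limit widens slowly after sustained good behavior and contracts sharply on a violation. The mechanism, rates, and spend-limit example below are all illustrative assumptions.

```python
# Hypothetical sketch: a trust boundary that expands on trust signals and
# contracts immediately on violations. Rates and thresholds are assumptions.
class TrustBoundary:
    def __init__(self, spend_limit: float = 100.0):
        self.spend_limit = spend_limit
        self.clean_actions = 0

    def record(self, action_ok: bool) -> None:
        if action_ok:
            self.clean_actions += 1
            if self.clean_actions % 50 == 0:   # expand slowly after sustained good behavior
                self.spend_limit *= 1.2
        else:
            self.clean_actions = 0
            self.spend_limit *= 0.5            # contract sharply on any violation

    def allows(self, spend: float) -> bool:
        return spend <= self.spend_limit

boundary = TrustBoundary(100.0)
for _ in range(50):
    boundary.record(True)                      # limit grows to 120.0
boundary.record(False)                         # one violation halves it to 60.0
print(boundary.spend_limit)  # 60.0
```

The asymmetry is deliberate: trust is earned gradually and lost quickly, which keeps a misbehaving agent inside a shrinking envelope without a human in the loop for every action.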
Can you design hybrid human-agent networks that self-orchestrate around outcomes?
Poor architecture: Coordination overhead consumes your attention, and agents get in the way
Good architecture: Agents coordinate themselves, surface only exceptions, and integrate seamlessly into the team
Can you recognize when agents drift from intended outcomes?
Requires sustained attention to monitor outcomes, not activity. You're verifying the WHAT (did we achieve the goal?) not the HOW (did they follow the process?).
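Monitoring the WHAT rather than the HOW can be as simple as comparing outcome metrics against their targets and flagging drift. A minimal sketch; the metric names, targets, and tolerance are illustrative assumptions:

```python
# Hypothetical sketch: flag outcome metrics that drift beyond a tolerance
# from target, instead of auditing the agent's process step by step.
def outcome_drift(measured: dict, targets: dict, tolerance: float = 0.1) -> list[str]:
    """Return the names of outcome metrics that drifted beyond tolerance."""
    drifted = []
    for name, target in targets.items():
        actual = measured.get(name, 0.0)
        if target and abs(actual - target) / target > tolerance:
            drifted.append(name)
    return drifted

targets = {"candidate_nps": 60, "time_to_productivity_weeks": 8}
measured = {"candidate_nps": 61, "time_to_productivity_weeks": 11}
print(outcome_drift(measured, targets))  # ['time_to_productivity_weeks']
```

Nothing here inspects how the agent worked; attention is spent only on goals that are slipping.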
Can you make high-quality decisions under ambiguity where AI can't?
When agents escalate, your judgment must be worth the interruption. You become the exception handler for complex, ambiguous, high-stakes decisions.
Understanding where we are and where we're heading. Different domains progress at different speeds.
AI reduces friction with autocomplete and recommendations
AI produces complete artifacts from specifications
AI executes multi-step workflows
AI discovers and orchestrates capabilities to achieve outcomes
AI infers and pursues outcomes from high-level intent
Lead by example: concrete use cases where HR can deploy agentic AI.
Identify candidates who match outcome-capability profiles
How to define outcomes clearly enough for agents to act. Where human judgment remains essential (culture fit, final assessment).
Every new hire achieves "productive contributor" status in their domain within X weeks
Multi-agent coordination (learning agent + access agent + social agent + progress monitor). Human-in-the-loop design (when to escalate vs. handle autonomously). Continuous improvement (the agent gets better at onboarding over time).
Time to productivity dropped 40%, and new hires report better experience
Ensure fair, market-aligned compensation
How to define fairness as an outcome. How to trust agent recommendations with spot-check verification.
Managers spend time on judgment, not data gathering
Data synthesis at scale. How AI surfaces insights humans might miss. Where AI fails (nuance, context, sensitive situations). The shift from "manager writes review" to "manager validates and contextualizes AI synthesis".
Managers report 5 hours saved per review cycle, better data-driven conversations
Systematically address root causes of attrition, not just collect exit interview data
Pattern detection across qualitative data. Closing the loop (analysis → action → measurement). Sensitive data handling. When humans still need to do interviews (senior leaders, complex situations).
Attrition in team X dropped after AI identified root cause pattern
Every candidate has excellent experience regardless of hiring decision
Multi-touchpoint orchestration. Personalization at scale. Feedback loops and continuous improvement. Balancing automation with human touch (where do humans add value?).
Candidate NPS increased 40 points, even for rejected candidates
Employees access HR services through any interface and language they prefer
Interface-agnostic and language-agnostic service design. How to expose capabilities for agent consumption (not just human UIs). Security models that work across channels. Accessibility through choice (not just compliance).
HR portal usage drops 60% while employee satisfaction with HR services increases 35%, with highest satisfaction gains among non-native English speakers
Match employees to opportunities in near real-time based on skills, aspirations, and project needs
Real-time talent marketplace design. Moving from annual mobility reviews to continuous matching. Skills inference from work patterns. Balancing employee aspirations with business needs.
Internal mobility increased 40%, time-to-fill for critical roles reduced by 50%
HR policies and legal frameworks become accessible to AI agents across the organization, enabling compliance by design
Making compliance a service, not a gate. Designing for agent consumption alongside human consumption. Policies become contextual and embedded in workflows, not periodic training events.
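"Compliance as a service" can be sketched as a policy check other teams' agents call in-workflow and receive a machine-readable verdict from. The rules, field names, and offer example below are purely illustrative assumptions:

```python
# Hypothetical sketch: an HR policy exposed as a callable check so agents
# across the organization can verify compliance inside their own workflows.
def check_offer_compliance(offer: dict) -> dict:
    """Return a machine-readable compliance verdict for a job offer."""
    violations = []
    if offer.get("salary", 0) < offer.get("band_min", 0):
        violations.append("salary below approved band")
    if not offer.get("background_check_scheduled", False):
        violations.append("background check not scheduled")
    return {"compliant": not violations, "violations": violations}

verdict = check_offer_compliance(
    {"salary": 90_000, "band_min": 95_000, "background_check_scheduled": True}
)
print(verdict)  # {'compliant': False, 'violations': ['salary below approved band']}
```

Because the policy is embedded at the point of action rather than taught periodically, a non-compliant offer is caught before it exists, which is what "compliance by design" means in practice.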
Policy violations drop 70%, compliance questions to HR decrease 60%, other teams integrate HR rules without manual coordination
Experts' encoded knowledge becomes scalable and reusable, with clear attribution and impact measurement for recognition and career advancement
Measuring knowledge contribution at scale. Creating incentives for knowledge sharing vs hoarding. Redefining expertise value from individual execution to organizational impact. New promotion criteria based on knowledge leverage.
Knowledge sharing increased 60%, junior team members achieve senior-level outcomes 3x faster, experts promoted based on organizational impact not just individual output
The promise is compelling, but the reality is more nuanced. Here's what we know today.
As agents evolve, humans won't match AI's tool proficiency through training. Tool lock-in becomes a human constraint while agents adapt continuously, so the traditional "get better at using tools" career path is becoming less relevant. The shift: from "hire people who know Tool X" to "hire people who can define outcomes, guard clarity of intent, and exercise judgment where AI can't."
What we don't know yet and what could go wrong.