v0.2.0 · March 2026

Directed Systems

A Practitioner Paper

A framework for reading AI systems that direct work before they fully replace workers.

Initiative shifts into the system · Consequence remains with the human · Learning diverges over time
Reader's Frame
  1. AI is reorganizing work before it fully replaces workers.
  2. The key mismatch is between machine initiative and human consequence.
  3. The right unit of analysis is the workflow, not the headline job title.

Abstract

This paper argues that a significant share of current AI adoption is not best described as either full automation or simple assistance. A growing set of workflows fits a different pattern: AI systems increasingly shape priorities, rankings, drafts, recommendations, and next actions, while humans retain execution, exception handling, explanation, and accountability. I refer to this configuration as a directed system.

The paper makes a bounded analytical contribution. It proposes that directed systems can be identified through three diagnostic properties: shifted initiative, retained human consequence, and asymmetric learning. The first means that initiative over what should happen next moves into the system. The second means that responsibility for execution, legitimacy, or failure remains with humans. The third means that the system learns from accumulating use while humans often lose repeated exposure to the routine work that previously built skill. Taken together, these properties help connect current AI workflows to older automation problems described by Bainbridge, Dekker and Woods, Elish, and the broader literature on levels of automation.

Recent evidence from NBER, Stanford, Anthropic, the World Economic Forum, and MIT suggests that AI use is already widespread, task exposure is measurable, worker preferences often favor more human agency than technical capability alone would imply, and early labor-market effects may already be visible. The paper does not claim novel empirical discovery. It offers a practitioner synthesis and a diagnostic frame intended to make current AI adoption easier to analyze, design, and critique.

Thesis in Brief

If the argument works, the reader should leave with three claims.

  1. A meaningful share of current AI adoption is reorganizing work before it fully replaces workers.
  2. The central risk is not only automation. It is the growing mismatch between where initiative sits and where consequence remains.
  3. The most useful way to inspect an AI-mediated workflow is to ask three questions: who sets direction, who carries consequence, and who continues to learn.

The term directed system is useful only if it makes those claims easier to see in concrete cases.

1. Introduction

Discussion about AI and work is still often organized around a binary question: will AI replace people, or will it assist them? That framing is no longer sufficient for a large class of real deployments.

In many workflows, AI is no longer only a passive tool that waits for human instruction. At the same time, it is not operating autonomously from end to end. Instead, it shapes what should happen next. It ranks cases, drafts outputs, routes requests, suggests actions, flags exceptions, and evaluates candidate responses. Humans then execute, verify, explain, absorb edge cases, and carry the consequences when the system fails.

This paper uses the term directed system for that configuration: a work arrangement in which AI exerts meaningful directional influence over the flow of work while humans retain operational responsibility.

The claim is not that this pattern is universal, nor that it is entirely new in principle. Similar arrangements have existed in earlier automation regimes. The claim is that contemporary generative and predictive AI are making this pattern more common across domains that were previously less formalized, less automated, or less visible as automation targets.

That matters because the central design question changes. The issue is no longer only what AI can do. The issue is what happens when initiative moves into the system faster than consequence moves away from the human.

2. Contribution

This paper does not offer a new dataset. Its contribution is analytical.

It does three things:

  1. It proposes "directed systems" as a useful term for a recurring configuration in current AI adoption.
  2. It defines that configuration through three diagnostic properties rather than through loose metaphor.
  3. It links those properties to established literature on automation, substitution, and responsibility transfer.

This matters because current debate often oscillates between replacement narratives and generic collaboration rhetoric. Both are limited. Replacement narratives understate how often humans remain inside the chain of consequence. Collaboration rhetoric often fails to ask who actually sets direction, who bears failure, and who continues to learn.

The point of the term is to force those questions into the foreground.

3. Relation to Prior Literature

The argument sits inside an older conversation rather than outside it.

The literature on levels of automation, from Fitts's early function-allocation lists to Sheridan and Verplank's taxonomy, already showed that control is not all-or-nothing. Systems can take over selection, recommendation, timing, or execution in different combinations. What current AI changes is the scale and scope of those combinations across ordinary knowledge work.

Bainbridge's "ironies of automation" remain directly relevant. When systems take over regular cases, humans are left with exceptions. Monitoring becomes difficult. Skills decay because people no longer practice the tasks they are expected to perform under failure. These are not historical curiosities. They describe a predictable risk inside many AI-mediated workflows.

Dekker and Woods critique what they call the substitution myth: the assumption that a machine can take over a task without changing the surrounding system. The term directed systems is useful here because it describes what that surrounding change looks like when AI takes on more initiative without fully absorbing consequence.

Elish's "moral crumple zone" adds a further point. In some socio-technical systems, humans remain the visible site of blame even when meaningful control has migrated elsewhere. Directed systems do not always produce that outcome, but they create favorable conditions for it when responsibility remains human while initiative and opacity increase.

The concept proposed here should therefore be read as a synthesis across those lines of work, updated for contemporary AI deployment rather than presented as a new scientific category.

4. Three Diagnostic Properties of Directed Systems

The paper's main analytical claim is that directed systems can be identified through three properties. Not every AI-enabled workflow will display all three. The framework is most useful when they appear together.

4.1 Shifted Initiative

The first property is shifted initiative. The system does not merely respond to explicit commands. It meaningfully shapes what should happen next.

This can take several forms:

  • ranking or triaging inputs
  • proposing the next task or action
  • drafting outputs that become the default starting point
  • flagging anomalies that determine where attention goes
  • evaluating candidate responses before humans intervene

Shifted initiative matters because it changes the locus of practical control. Even if the human remains formally "in charge," the system increasingly determines the menu of actions, the order of attention, and the baseline from which decisions proceed.

4.2 Retained Human Consequence

The second property is retained human consequence. The human remains responsible for execution, legitimacy, explanation, recovery, or blame.

This does not always mean legal liability in a narrow sense. It can also mean that the human is still the person who:

  • communicates the decision to another human
  • handles the exception when the system fails
  • absorbs customer or stakeholder reaction
  • signs off on an action in a regulated or high-stakes process
  • carries reputational or organizational cost when outcomes are poor

This property is what distinguishes directed systems from stronger forms of autonomy. Initiative may move into the system, but consequence does not move proportionally.

4.3 Asymmetric Learning

The third property is asymmetric learning. The system improves, or is at least updated, through repeated exposure across many cases. The human, by contrast, may lose repeated exposure to routine cases because the system now handles or shapes them.

This matters because routine work has often been the substrate through which skill develops. When the system absorbs early repetitions, the human may be left with fewer opportunities to build intuition, diagnose anomalies, or recognize subtle failure patterns. The result is not simple replacement. It is uneven capability accumulation.
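
The divergence is easy to see with a toy calculation. The sketch below (Python) uses invented numbers with no empirical standing; it only makes the compounding visible.

```python
# Toy arithmetic only: the routine share and weekly volume are
# invented numbers, not estimates drawn from the studies cited here.
routine_share = 0.8    # fraction of cases the system now absorbs
cases_per_week = 100
weeks = 52

system_exposure = cases_per_week * routine_share * weeks
human_exposure = cases_per_week * (1 - routine_share) * weeks

# After a year the system has seen four times as many cases as the
# human, and the human's remaining cases skew toward exceptions.
print(f"system: {system_exposure:.0f}, human: {human_exposure:.0f}")
# system: 4160, human: 1040
```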

This property connects directly to Bainbridge's concern about skill decay and to contemporary worries about junior workers losing the tasks through which they previously learned the craft.

Taken together, shifted initiative, retained human consequence, and asymmetric learning generate the distinctive tension of directed systems. Initiative migrates. Responsibility remains. Learning diverges.

Viewed against earlier literature, the three properties also map cleanly onto older concerns: shifted initiative connects to levels-of-automation analysis, retained human consequence connects to authority and blame allocation, and asymmetric learning connects to skill decay and degradation of human backup capacity.
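
Read as a checklist, the three questions can also be rendered as a small audit rubric. The Python sketch below is one possible rendering; the field names, scores, and threshold are hypothetical illustrations, not part of the framework itself.

```python
from dataclasses import dataclass

@dataclass
class WorkflowAudit:
    """Illustrative audit record for one AI-mediated workflow.

    Each score is a 0.0-1.0 judgment call: how far initiative has
    moved into the system, how much consequence stays human, and
    how much routine practice the human has lost.
    """
    name: str
    system_initiative: float    # Who sets direction?
    human_consequence: float    # Who carries consequence?
    human_practice_loss: float  # Who stops learning?

    def is_directed_system(self, threshold: float = 0.5) -> bool:
        """Flag the workflow when all three properties co-occur,
        mirroring Section 4's 'most useful together' criterion."""
        return all(score >= threshold for score in (
            self.system_initiative,
            self.human_consequence,
            self.human_practice_loss,
        ))

# Example: a triage queue where the system ranks and drafts,
# while the human still signs off and absorbs failures.
triage = WorkflowAudit("case triage", 0.8, 0.9, 0.6)
print(triage.is_directed_system())  # True
```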

5. What Current Evidence Shows

The evidence base is still developing, but it is already sufficient to support a few claims.

5.1 Adoption Is Broad Enough to Matter

NBER survey work shows unusually rapid adoption of generative AI. By late 2024, nearly 40% of U.S. adults ages 18-64 had used generative AI. Among employed respondents, 23% had used it for work in the prior week, and 9% had used it every workday.

This does not prove organizational transformation, but it does show that usage is no longer confined to early adopters or technical specialists.

5.2 Task Exposure Is Measurable

Stanford's WORKBank project reports that AI tools are already used for at least 25% of tasks in 36% of occupations. MIT's Project Iceberg estimates that current AI capabilities overlap with 11.7% of U.S. labor-market wage value. Those figures should not be read as direct displacement estimates. They are measures of exposure and capability overlap, not completed labor-market outcomes.

Their significance lies elsewhere: they show that the relevant unit of change is often the task bundle, not the formal job title.
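
A small sketch can make the unit-of-analysis point concrete. The occupations and task labels below are invented for illustration; only the shape of the comparison matters.

```python
# Invented task bundles; the labels and exposure flags carry no
# empirical weight and simply illustrate the unit-of-analysis point.
occupations = {
    "support agent": {"triage tickets": True, "draft replies": True,
                      "handle escalations": False, "approve refunds": False},
    "paralegal": {"summarize filings": True, "check citations": True,
                  "interview clients": False, "file with court": False},
}

for title, tasks in occupations.items():
    exposed = sum(tasks.values()) / len(tasks)
    # Each job title survives intact while half its bundle is exposed.
    print(f"{title}: {exposed:.0%} of tasks exposed")
```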

5.3 Directional Use Appears to Be Increasing

Anthropic's 2025 Economic Index suggests that automation-oriented and directive uses of AI are rising, especially in business API contexts. That matters for this paper because the issue is not just how often AI is used, but whether it is being used to shape workflow direction rather than only to support isolated tasks.

5.4 Human Agency Remains a Design Variable

WORKBank is particularly important because it does not collapse automation and augmentation into a single measure. Its findings indicate that workers in many occupations prefer a collaborative model rather than full machine control. Equal partnership was the dominant worker-preferred level in 47 of 104 occupations analyzed, and on 47.5% of tasks workers preferred higher levels of human agency than experts judged technologically necessary.

That gap suggests that capability alone does not settle the design question. Acceptable control and technically possible control are not identical.

5.5 Early-Career Effects Deserve Attention

Stanford Digital Economy Lab's "Canaries in the Coal Mine?" reports a 16% relative decline in employment for early-career workers ages 22-25 in the most AI-exposed occupations. This is not decisive evidence of generalized displacement. It is, however, a plausible warning sign for the asymmetric learning problem described earlier.

If systems absorb a large share of beginner work, they may also weaken the progression path through which workers acquire judgment and domain fluency.

5.6 Productivity Evidence Needs Interpretation

The evidence on productivity gains should not be ignored. In a field study of 5,179 customer support agents, access to a generative AI assistant increased productivity by 14% on average, with substantially larger gains for novice and lower-skilled workers.

This shows that AI can improve throughput and performance in bounded settings. It does not by itself resolve the questions of accountability, deskilling, exception handling, or authority allocation. Productivity gains and work design problems can coexist.

6. Case Sketches

The concept is easier to evaluate when applied to concrete workflow types.

6.1 Customer Support

In support environments, AI can classify incoming tickets, retrieve suggested answers, draft responses, and prioritize queues. That is shifted initiative. The support worker still manages difficult customers, escalations, ambiguity, and the social consequences of a poor answer. That is retained human consequence. If the system absorbs standard tickets, novice workers may see fewer routine cases and learn the product or the failure patterns more slowly. That is asymmetric learning.
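
To make that division of labor explicit, the sketch below separates a system pass from a human pass. Every name in it is hypothetical, and the "model" calls are stand-in heuristics rather than any real vendor API.

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    ticket_id: str
    text: str
    urgency: float = 0.0  # assigned by the system, not the agent
    draft: str = ""       # becomes the agent's default starting point

def score_urgency(text: str) -> float:
    """Stand-in for a model call that ranks the queue."""
    return 1.0 if "outage" in text.lower() else 0.2

def draft_reply(text: str) -> str:
    """Stand-in for a model call that produces the default reply."""
    return f"Thanks for writing about: {text[:40]!r}. Suggested next steps..."

def system_pass(tickets: list[Ticket]) -> list[Ticket]:
    """Shifted initiative: the system sets the order of attention
    and the baseline from which each reply proceeds."""
    for t in tickets:
        t.urgency = score_urgency(t.text)
        t.draft = draft_reply(t.text)
    return sorted(tickets, key=lambda t: t.urgency, reverse=True)

def human_pass(queue: list[Ticket]) -> list[tuple[str, str]]:
    """Retained consequence: the agent approves and sends each reply
    under their own name. Approval is simulated as pass-through here."""
    return [(t.ticket_id, t.draft) for t in queue]

queue = system_pass([
    Ticket("T1", "Question about billing cycle"),
    Ticket("T2", "Full outage in EU region"),
])
for ticket_id, reply in human_pass(queue):
    print(ticket_id, "->", reply)
```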

6.2 Software Engineering

In software work, models can generate code, tests, refactors, and implementation plans. This does not eliminate the engineer. It changes where the engineer spends effort: integration, verification, debugging, architecture, and failure analysis. If routine coding moves into the model while human responsibility for correctness remains, the workflow becomes directional without becoming autonomous. Junior engineers may face a narrower ladder if they are asked to review or patch model output before they have built enough underlying intuition.

6.3 Operational or Clinical Triage

In operations and clinical settings, systems can rank cases, flag anomalies, estimate urgency, and propose action order. Humans may still explain outcomes, perform interventions, handle exceptions, or bear institutional blame for errors. This is precisely where the distinction between initiative and consequence matters. A system can influence priority order very strongly while leaving the human fully exposed when a low-frequency but high-cost case is handled badly.

These cases are not proofs. They are clarifying examples. Their purpose is to show that the framework identifies a pattern across domains without assuming every domain is identical.

7. Objections and Boundary Conditions

A useful framework should survive obvious objections.

7.1 "Is This Just Automation by Another Name?"

Not exactly. The term is useful only if it captures a specific mismatch: initiative moving into the system without a proportional transfer of consequence away from the human. Many forms of automation do not produce that exact pattern. Some remove the human more fully. Others leave AI in a purely assistive role. Directed systems are a subset, not a synonym.

7.2 "Do All Good Tools Shape Behavior?"

Yes. Many tools shape behavior. The question is degree. The concept becomes useful when system output is not just helpful, but materially organizes attention, sequencing, or defaults in a way that changes who decides what next.

7.3 "Is the Framework Too Broad?"

It can become too broad if applied loosely to every human-AI workflow. That is why the three diagnostic properties matter. A system that lacks shifted initiative, retained human consequence, or asymmetric learning may still matter, but it is not the core case this paper is trying to describe.

7.4 "Does This Understate Full Replacement Risk?"

No. Full replacement remains a real possibility in some domains. The argument here is narrower. It is that a large share of the near-term transition is likely to pass through this intermediate configuration, and that this intermediate configuration has design problems of its own.

8. Implications

8.1 For Analysis of Work

The most informative unit of analysis is often neither the model benchmark nor the job title. It is the allocation of initiative, consequence, and learning across the workflow.

That shifts the analytical question from "Can AI do this task?" to a more informative set of questions:

  • Who sets direction?
  • Who carries consequence?
  • Who continues to learn?

Those questions make many current deployment debates easier to interpret.

8.2 For Workers

Workers are likely to retain more leverage in roles where judgment, legitimacy, stakeholder handling, exception management, and real-world coordination remain central. The implication is not that routine work vanishes immediately. It is that routine work becomes a weaker basis for long-term advantage when system initiative increases.

8.3 For Managers and Founders

Managers should treat deployment readiness as an organizational design question rather than a model-performance question. Capability is not enough. The relevant questions are practical:

  • Who intervenes when output quality is poor?
  • How is failure detected?
  • What happens to onboarding and apprenticeship if routine cases are absorbed?
  • Who remains exposed when the system operates with opaque defaults?

An adoption strategy is incomplete if those questions remain unanswered.

8.4 For System Designers

Design should not always target maximal autonomy. In many workflows, the more stable target is a workable division of labor with preserved explanation paths, inspectability, override capacity, and continued human skill formation.

A serious design error may be leaving humans with too much residual responsibility and too little residual agency.

9. Limits

This paper makes a framing argument, not a full labor-market theory.

Several limits should be stated clearly:

  • The evidence base is recent and still shifting.
  • Task exposure is not the same as displacement.
  • Some parts of the argument are interpretive syntheses rather than direct outputs of a single study.
  • The proposed diagnostic is intended as a practitioner lens, not a settled scientific taxonomy.
  • The framework is likely to be most useful in knowledge work and mixed socio-technical workflows, not every sector equally.

These limits define the scope of the paper. They are not a reason to ignore the pattern it identifies.

10. Conclusion

Much of the current AI transition is not well described by a simple opposition between assistance and replacement. A growing share of deployments appears to fit a different pattern: initiative moves into the system, humans retain consequence, and learning becomes uneven.

That pattern is what this paper calls a directed system.

The point is not just classificatory. The concept is useful only if it changes what the reader looks at. Instead of asking only whether AI can perform a task, the more revealing questions are simpler: who sets direction, who carries consequence, and who continues to learn.

If those three dimensions separate too far, the workflow may be more fragile than it first appears. A system can look efficient while quietly concentrating initiative in the machine, leaving responsibility with the human, and weakening the learning path that once produced competent operators.

That is the paper's narrow claim. Much of the near-term transition is likely to be shaped less by full replacement than by this redistribution problem inside everyday work. That claim is narrower than many future-of-work arguments. It is also, in the current phase of adoption, a more useful one.

Author's Note and Disclosure

This document is a practitioner paper, not a peer-reviewed academic publication. It was researched, drafted, and edited by Petru Arakiss with AI assistance, including Anthropic's Claude Code and OpenAI Codex. The framing, synthesis, final judgment, and responsibility for errors remain the author's.

References

  • Anthropic. (2025). Anthropic Economic Index: Tracking AI's role in the U.S. and global economy.
  • Bainbridge, L. (1983). Ironies of Automation. Automatica, 19(6), 775-779.
  • Bick, A., Blandin, A., and Deming, D.J. (2024). The Rapid Adoption of Generative AI.
  • Brynjolfsson, E., Chandar, B., and Chen, R. (2025). Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence.
  • Brynjolfsson, E., Li, D., and Raymond, L.R. (2023). Generative AI at Work.
  • Chopra, A. et al. (2026). Project Iceberg: The Iceberg Index: Measuring Skills-centered Exposure in the AI Economy.
  • Dekker, S.W.A. and Woods, D.D. (2002). MABA-MABA or Abracadabra? Progress on Human-Automation Coordination. Cognition, Technology and Work, 4(4), 240-244.
  • Elish, M.C. (2019). Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction. Engaging Science, Technology, and Society, 5, 40-60.
  • Fitts, P.M. (1951). Human engineering for an effective air navigation and traffic control system.
  • Shao, Y. et al. (2025). Future of Work with AI Agents: Auditing Automation and Augmentation Potential across the U.S. Workforce.
  • Sheridan, T.B. and Verplank, W.L. (1978). Human and Computer Control of Undersea Teleoperators.
  • World Economic Forum. (2025). The Future of Jobs Report 2025.

Directed Systems · v0.2.0 · March 2026

Petru Arakiss