A Practical Guide to Working With AI

Something is changing in the nature of work. This guide explains what we know about that change, why humans remain necessary, and how to prepare.

The Shift

In 2021, Sam Altman observed: "AI will cause the price of work that can happen in front of a computer to decrease much faster than the price of work that happens in the physical world."

By 2025, this is measurable:

  • Stanford's WORKBank study found that workers in 36% of occupations already use AI for at least a quarter of their tasks.
  • The World Economic Forum projects that 39% of existing skills will be transformed or become outdated by 2030.
  • MIT estimates that AI is already capable of performing work equivalent to 11.7% of U.S. jobs, though adoption lags capability.

The pattern is clear: work involving information processing is being transformed. Work requiring physical presence, trust, and human judgment is not.

This is not speculation. It is happening now.

Why Humans Are Still Necessary

The question "what should humans do versus machines?" has been studied since 1951. Seventy years of research points to consistent answers.

What AI Does Well

  • Processing large amounts of information quickly
  • Finding patterns in data
  • Generating content based on learned patterns
  • Performing repetitive cognitive tasks without fatigue
  • Maintaining consistency across thousands of decisions

What AI Does Poorly

  • Learning from small amounts of information (humans need few examples; AI needs many)
  • Extrapolating beyond training data to genuinely new situations
  • Building trust and rapport with other humans
  • Reading social and emotional context
  • Operating in unstructured physical environments
  • Making judgments that require understanding unstated assumptions

This is not about what AI cannot do "yet." These limitations reflect fundamental differences in how humans and machines process the world. Some will narrow over time. Others are structural.

The Centaur Principle

After Garry Kasparov lost to Deep Blue in 1997, he created a new form of chess: humans and computers playing together against other human-computer teams.

The surprising result: amateurs with ordinary computers, who had developed effective methods of collaboration, beat both lone grandmasters and superior hardware playing alone.

This pattern repeats across domains. Mayo Clinic's radiology department grew by 55% between 2016 and 2025 while deploying over 250 AI models. The AI handles measurement and pattern detection. Humans handle diagnosis and patient care. Neither replaces the other.

The lesson: the question is not "human or AI?" It is "how do they work together effectively?"

What Changes For You

The shift in valuable skills is consistent across research:

Decreasing in value:

  • Information processing (gathering, organizing, summarizing data)
  • Routine analysis (applying known frameworks to standard situations)
  • Content production (writing, design, coding that follows established patterns)

Increasing in value:

  • Interpersonal capabilities (building trust, resolving conflict, reading people)
  • Judgment in ambiguous situations (when the right answer is not clear)
  • Physical presence and manipulation (work that requires being somewhere)
  • Domain expertise combined with AI literacy (knowing what to ask and how to evaluate answers)
  • Defining constraints and goals (telling systems what to optimize for)

This does not mean information work disappears. It means the nature of that work changes. You shift from doing the processing to directing the processing and evaluating the output.

The Risks

Seventy years of automation research also documents consistent failure patterns.

The Ironies of Automation

Lisanne Bainbridge documented in 1983 what she called "ironies of automation." They remain unsolved:

  1. Automation makes remaining human work harder. When systems handle routine cases, humans are left with only the exceptions. This concentrates difficulty rather than reducing it.

  2. Skills decay without practice. The abilities needed when systems fail atrophy precisely because systems rarely fail. When they do fail, humans are least prepared.

  3. Monitoring is exhausting. Humans are poor at watching automated systems and waiting for something to go wrong. Vigilance degrades over time.

  4. Opacity is dangerous. When humans cannot understand what a system is doing, they cannot intervene effectively. The Boeing 737 MAX crashes killed 346 people in part because pilots had no mental model of the automation's behavior.

These are not solved problems. Any system where AI directs and humans execute must account for them.

The Substitution Myth

Sidney Dekker and David Woods identified the "substitution myth": the false belief that machines can replace human tasks without changing the nature of work itself.

This never happens. Introducing automation transforms remaining work. Sometimes it makes work easier. Sometimes it makes work harder in ways the designers did not anticipate. The transformation itself must be managed.

How to Prepare

Based on what we know:

1. Develop capabilities AI handles poorly

Focus on skills that remain distinctly human:

  • Building trust and rapport with people
  • Reading context and unstated assumptions
  • Making judgments when information is incomplete
  • Physical presence and interaction
  • Navigating ambiguous social situations

These are not consolation prizes. They are where human value concentrates as AI handles information processing.

2. Learn to work with AI effectively

The centaur principle suggests that effective human-AI collaboration is itself a skill. This means:

  • Understanding what AI systems can and cannot do
  • Knowing how to evaluate AI output (when to trust it, when to verify)
  • Developing methods for directing AI toward useful work
  • Recognizing when AI is wrong or hallucinating

This is not "prompt engineering." That was a transitional skill that has already peaked. This is deeper: understanding how to collaborate with systems whose strengths and weaknesses differ from your own.
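
To make the evaluation habit concrete, here is a minimal sketch in Python of a "trust, but verify" wrapper around AI output. Everything in it is illustrative: Check, review_ai_output, and the individual checks are invented for this example and stand in for whatever independent validation your domain allows.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Check:
        """One independent validation applied to a piece of AI output."""
        name: str
        passed: Callable[[str], bool]   # returns True if the output passes

    def review_ai_output(output: str, checks: list[Check]) -> tuple[bool, list[str]]:
        """Run every check and collect the names of those that fail.

        The structure is the point: trust is earned per output, not assumed,
        and failures are named specifically so they can be corrected.
        """
        failures = [c.name for c in checks if not c.passed(output)]
        return (len(failures) == 0, failures)

    # Hypothetical usage: screening an AI-drafted summary before relying on it.
    checks = [
        Check("non-empty", lambda s: bool(s.strip())),
        Check("within length budget", lambda s: len(s.split()) <= 200),
        Check("cites a source", lambda s: "source:" in s.lower()),
    ]
    ok, failed = review_ai_output("Revenue rose 4%. Source: Q3 report.", checks)
    if not ok:
        print("Rejected; failed checks:", failed)   # escalate to a human pass

None of these checks catches hallucination by itself; what transfers is the habit they encode, verification before trust.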

3. Combine domain expertise with AI literacy

The most valuable position is knowing your field deeply AND understanding how AI applies to it. Neither alone is sufficient.

Pure domain experts without AI literacy will find their information-processing work automated. Pure AI generalists without domain expertise cannot evaluate whether outputs make sense.

4. Focus on defining problems, not just solving them

AI systems optimize for objectives they are given. Defining good objectives, setting appropriate constraints, and identifying what should NOT be optimized are increasingly valuable.

This is architectural work: shaping what systems do rather than doing the work yourself.
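
As a toy illustration of what that architectural work can look like, here is a short Python sketch of an objective specification. The field names and the triage example are invented for illustration; the point is that the human contribution is the definition itself, including what is deliberately left out of scope.

    from dataclasses import dataclass

    @dataclass
    class Objective:
        """A human-authored spec handed to an optimizing system (illustrative)."""
        maximize: str                 # the single metric the system may optimize
        hard_constraints: list[str]   # conditions that must never be violated
        do_not_optimize: list[str]    # explicitly off-limits, even if measurable

    # Hypothetical example: a support-ticket triage system.
    support_triage = Objective(
        maximize="tickets resolved per day",
        hard_constraints=["no ticket waits more than 24 hours"],
        do_not_optimize=["customer satisfaction scores"],   # easily gamed; keep human-judged
    )

The do_not_optimize field is the part most often forgotten: naming what a system must not chase is as much a design decision as naming what it should.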

5. Build feedback skills

AI systems improve from human feedback. The ability to evaluate output precisely, identify specific problems, and communicate corrections clearly becomes valuable.

You are not just using AI. You are training it.
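
A sketch of what precise feedback can look like, again with invented names: the structure forces the three elements described above, where the problem is, what is wrong, and what correct looks like.

    from dataclasses import dataclass

    @dataclass
    class Correction:
        """One precise, reusable piece of feedback on an AI output (illustrative)."""
        location: str    # where the problem is ("summary, sentence 3")
        problem: str     # what specifically is wrong
        expected: str    # what a correct version would say

    feedback = Correction(
        location="summary, sentence 3",
        problem="states revenue grew 12%; the source table says 8%",
        expected="report 8% and cite the table it comes from",
    )
    # "Wrong, redo it" trains nothing. A record like this can steer the next
    # attempt and later be reused as a regression test for the same system.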

What We Do Not Know

Intellectual honesty requires acknowledging uncertainty:

  • We do not know how fast this will change. The current configuration (AI that can decide, humans that must execute) may last decades or may shift rapidly.

  • We do not know whether the centaur pattern holds everywhere. In some domains, human-AI collaboration may outperform either working alone. Others may tip toward full automation.

  • We do not know the labor implications. Whether this creates, transforms, or eliminates jobs at scale is an empirical question we cannot answer in advance.

  • We have not solved the ironies of automation. Forty years of research has not produced reliable solutions. Systems that put AI in charge and humans in support inherit these problems.

Anyone claiming certainty about how this unfolds is guessing.

What "Directed Systems" Means

We use "directed" to describe a specific configuration: systems where AI holds significant decision-making authority while humans retain execution, oversight, and the ability to intervene.

This is one region on a spectrum. At one end, humans decide and AI assists. At the other, AI operates autonomously. "Directed" describes the middle ground that is becoming common: AI determines what should happen, humans make it happen and verify the results.

Understanding this configuration matters because it is where many work relationships with AI are heading. Not full automation. Not simple assistance. Something in between that requires its own skills.
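
A minimal sketch of that configuration in Python, with every function name invented for the example: the AI proposes, a human retains the veto, humans execute, and nothing unverified is accepted.

    def directed_step(ai_propose, human_review, execute, verify):
        """One cycle of a directed system (illustrative structure, not an API).

        ai_propose:   the AI decides what should happen next.
        human_review: a human can amend the proposal or veto it (returns None).
        execute:      a human, or human-supervised tooling, carries it out.
        verify:       results are checked before being accepted.
        """
        proposal = ai_propose()
        decision = human_review(proposal)          # the intervention point
        if decision is None:                       # vetoed: nothing executes
            return None
        result = execute(decision)
        return result if verify(result) else None  # unverified work is discarded

The skills this guide describes map onto the loop: judgment lives in human_review, feedback skills in verify, and problem definition in whatever objective ai_propose was given.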

Sources

This guide draws from:

  • Fitts, P.M. (1951). Human engineering for an effective air navigation and traffic control system.
  • Sheridan, T.B. & Verplank, W.L. (1978). Human and Computer Control of Undersea Teleoperators.
  • Bainbridge, L. (1983). Ironies of Automation. Automatica, 19(6), 775-779.
  • Dekker, S.W.A. & Woods, D.D. (2002). MABA-MABA or Abracadabra? Cognition, Technology & Work.
  • Elish, M.C. (2019). Moral Crumple Zones. Engaging Science, Technology, and Society.
  • Shao, Y. et al. (2025). Future of Work with AI Agents. Stanford Digital Economy Lab.
  • McKinsey Global Institute (2025). Agents, robots, and us.

Directed Systems · v0.1 · January 2026

Petru Arakiss