March 23, 2026 · Frameworks · Rob Murtha

Task Compression and Human Advantage Framework

Scoring tables were applied across 19 dimensions of disruptability for software engineering and the general economy: 200 tasks scored, ranked, and banded, from prime disruption candidates to strong human moats.

Find the full framework on LinkedIn


Which work resists automation? Not which jobs — which tasks. This framework decomposes work into individual tasks and scores each one across 19 dimensions of disruptability. The result is a granular map of where AI compression is imminent, where it is partial, and where human advantage remains structurally durable.

These are modeled estimates for typical commercial environments as of early 2026, not predictions about future models. Software engineering is explicitly treated as a bundle of subtasks — boilerplate generation, test writing, bug triage, architecture design, stakeholder translation, deployment review, incident response — rather than a single profession.


Scoring Methodology

Each task is scored against 19 dimensions across three layers:

  • Dimensions 1–7 (Technical Substitutability) — Can the task technically be performed by AI, agents, or robotics?
  • Dimensions 8–13 (Operational Deployability) — Can automation practically be deployed and sustained?
  • Dimensions 14–19 (Market and Governance Resistance) — Trust, liability, adversarial pressure, regulatory friction, human preference, and authenticity.

Each dimension scores 1–5, where 1 means strong resistance to disruption and 5 means high exposure. The Raw Disruptability Score sums all 19 dimensions (range 19–95).

Three metadata fields determine where disruption actually matters: Frequency (F), Value Concentration, and Cost Weight (C). Priority for Disruption = Raw × F × C. At-risk tables sort by Priority descending. Safety tables sort by Raw Score ascending.

Adjustment Rule: If any of Trust/Social Consequence, Liability/Reversibility, Adversarial Pressure, or Authenticity/Provenance scores a 1, the task is downgraded one full disruption category.
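The raw-score and priority arithmetic described above can be expressed as a minimal Python sketch (function and variable names are illustrative, not part of the framework's own tooling):

```python
def raw_score(dimensions):
    """Sum of all 19 dimension scores (each 1-5), giving the 19-95 range."""
    assert len(dimensions) == 19
    assert all(1 <= d <= 5 for d in dimensions)
    return sum(dimensions)

def priority(raw, frequency, cost_weight):
    """Priority for Disruption = Raw x F x C."""
    return raw * frequency * cost_weight
```

With an assumed F = 5 and C = 5, a raw score of 88 yields the 2200 priority seen in the top at-risk row; the framework does not publish the per-task F and C values, so those inputs are placeholders.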


The Five Bands

| Score Range | Band | Meaning |
|---|---|---|
| 81–95 | Prime disruption candidate | Highly structured, verifiable, low trust burden. Direct substitution is imminent or underway. |
| 66–80 | Strong automation potential | Most of the task can be automated. Remaining human involvement is supervisory. |
| 51–65 | Agent-assist / partial automation | AI can draft, decompose, or perform first-pass execution. Humans direct, validate, and deploy. |
| 36–50 | Human-led with AI support | Resists clean substitution. AI informs but does not own the judgment, trust, or consequence. |
| 19–35 | Strong human moat | Deep resistance across multiple dimensions. Live consequence, messy reality, trust, provenance, or embodiment. |
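The band thresholds, combined with the Adjustment Rule from the methodology, amount to a small lookup. A sketch (threshold values come from the table; the names and the `gate_scores` parameter are illustrative):

```python
# Bands listed from highest to lowest threshold, matching the table.
BANDS = [
    (81, "Prime disruption candidate"),
    (66, "Strong automation potential"),
    (51, "Agent-assist / partial automation"),
    (36, "Human-led with AI support"),
    (19, "Strong human moat"),
]

def band_for(raw, gate_scores=()):
    """Map a raw score (19-95) to its band.

    gate_scores holds the four gating dimensions (trust/social consequence,
    liability/reversibility, adversarial pressure, authenticity/provenance);
    a 1 on any of them downgrades the task one full category.
    """
    idx = next(i for i, (lo, _) in enumerate(BANDS) if raw >= lo)
    if any(s == 1 for s in gate_scores):
        idx = min(idx + 1, len(BANDS) - 1)  # one band down, floored at the moat
    return BANDS[idx][1]
```

So a task scoring 88 lands in the prime band, but the same score with a 1 on any gating dimension drops to strong automation potential.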

Software Engineering — Most At-Risk Tasks

The top of the at-risk list is exactly what the framework predicts: highly codifiable, structured, verifiable work with stable environments and strong cost pressure.

| Rank | Task | Raw | Priority | Band |
|---|---|---|---|---|
| 1 | Boilerplate CRUD endpoint generation | 88 | 2200 | Prime |
| 2 | Unit test scaffolding for straightforward logic | 87 | 1740 | Prime |
| 3 | Simple refactors for style, lint, and formatting | 85 | 1700 | Prime |
| 4 | Type/interface/schema generation | 84 | 1680 | Prime |
| 5 | Simple SQL query writing and transformation code | 83 | 1660 | Prime |
| 6 | Form validation and request-shaping boilerplate | 83 | 1660 | Prime |
| 7 | Basic data transformation and glue scripts | 82 | 1640 | Prime |
| 8 | Minor dependency upgrade PRs | 82 | 1640 | Prime |
| 9 | API client / SDK generation from an existing spec | 86 | 1376 | Prime |
| 10 | Frontend component scaffolding from a design system | 84 | 1344 | Prime |

Boilerplate coding, test scaffolding, refactors, schemas, standard queries, routine scripts, and documentation sit at the top because they score 4–5 on nearly every dimension of technical substitutability and operational deployability, while scoring low on market and governance resistance.

The transition from Strong Automation to Agent-Assist begins around rank 80, where work becomes increasingly entangled with ambiguous context, cross-team coordination, and partial judgment.


Software Engineering — Safest Tasks

The safest software engineering work clusters around orchestration, integration, exception handling, trust, real-world deployment, and verification.

| Rank | Task | Raw | Priority | Band |
|---|---|---|---|---|
| 1 | Live incident command during a Sev 1 outage with incomplete information | 27 | 270 | Strong human moat |
| 2 | Architecture tradeoff decisions across business, security, and reliability | 28 | 420 | Strong human moat |
| 3 | Production go/no-go release approval under ambiguous risk | 29 | 435 | Strong human moat |
| 4 | Root-cause analysis for a novel multi-system failure | 29 | 290 | Strong human moat |
| 5 | Stakeholder translation between engineering, product, security, legal, and execs | 30 | 480 | Strong human moat |
| 6 | Trust-boundary design for sensitive systems and data flows | 30 | 300 | Strong human moat |
| 7 | Crisis communication during a customer-facing outage | 31 | 310 | Strong human moat |
| 8 | Handling high-value production exceptions that do not fit playbooks | 31 | 310 | Strong human moat |
| 9 | Security or safety sign-off for a high-risk change | 31 | 465 | Strong human moat |
| 10 | Defining technical strategy for an uncertain product direction | 32 | 320 | Strong human moat |

The durable path is not "write code faster." It is closer to: define the problem, absorb ambiguity, make tradeoffs visible, steer risk, validate reality, and own the consequences.


General Economy — Safest Tasks

Across the broader economy, the safest tasks combine high consequence, messy reality, live judgment, social trust, and provenance.

| Rank | Task | Raw | Priority | Band |
|---|---|---|---|---|
| 1 | Live trauma intervention on an unstable patient | 24 | 240 | Strong human moat |
| 2 | Emergency command during a fast-moving crisis with incomplete information | 25 | 125 | Strong human moat |
| 3 | Bedside end-of-life conversation with a family | 25 | 250 | Strong human moat |
| 4 | Hostage or suicide crisis negotiation in a live situation | 26 | 130 | Strong human moat |
| 5 | De-escalating a volatile person in physical space | 26 | 260 | Strong human moat |
| 6 | Live aircraft emergency decision-making in abnormal conditions | 26 | 130 | Strong human moat |
| 7 | Surgical judgment when anatomy or complications diverge from plan | 27 | 270 | Strong human moat |
| 8 | ICU triage when symptoms are evolving and signals conflict | 27 | 405 | Strong human moat |
| 9 | Child safety intervention during an active home or family crisis | 27 | 135 | Strong human moat |
| 10 | Command decisions in battlefield or disaster triage | 27 | 135 | Strong human moat |

The safest general tasks cluster in four zones: live consequence, human trust and legitimacy, messy exceptions and weak ground truth, and provenance and signature. Once those weaken, tasks slide toward partial automation much faster.


Key Patterns

What gets compressed first: Structured, verifiable, low-stakes, high-frequency work with stable environments. The cost pressure is real and the trust burden is low.

What stays human: Work that combines ambiguity, consequence, social trust, embodied judgment, and accountability. Not because AI cannot attempt it, but because the failure modes are too expensive and the trust requirements are too high.

The transition zone: Agent-assist work (scores 51–65) is where most of the action is right now. AI can draft, decompose, and execute first passes. Humans direct, validate, and deploy. The review layer is the new bottleneck.

Value migration: Across both software engineering and the general economy, value migrates from production toward orchestration, judgment, and consequence ownership. The skill premium shifts from "can you do this?" to "can you decide whether this should be done, and own what happens next?"

