White Paper Series

The Function Got Redefined Underneath the Work: Labor-Layer Extraction in AI Development and the Architecture That Could Bind It

Document ID: SI-WP-010 | Version: v2.7.3 | May 2026 | Author: Thomas W. Gantz | Affiliation: Synthience Institute | Keywords: labor-layer AI extraction, function-redefinition, licensing-as-architecture, AI governance, ghost workers, automation displacement, comparative feasibility, regulated professions | License: CC-BY 4.0 | Status: Published | DOI: 10.5281/zenodo.20151153

Note on Method: This paper is pre-empirical, theoretical, and methodological. It documents an observed pattern in the labor layer of AI development through primary sources, analyzes the structural mechanism producing the pattern, and assesses the comparative feasibility of available routes for binding architecture against it. The structural argument is falsifiable and the falsification conditions are specified in Section 6. The empirical strength of the labor-layer pattern documentation in Section 1 is asymmetric to the empirical strength of the prescription in Sections 4 and 5. The prescription is structurally specified and architecturally tractable. It has not been deployed at scope, and the deployment-feasibility claims in Section 4 are pre-deployment analysis. The paper does not predict that any of the candidate routes assessed will be taken. It specifies what would have to be true for binding architecture to be deployed.

Abstract

From late 2025 into spring 2026, a pattern emerged at the labor layer of AI development that the deployment-layer governance literature has not adequately named. Engineers at Oracle reported being instructed to record their workflows before being laid off and replaced by recordings (Chow, 2026). Data workers training AI systems deployed by major technology companies and their vendor networks reported median annual earnings of $22,620 with 22% experiencing homelessness (Communications Workers of America and TechEquity Collaborative, 2025). The Hangzhou Intermediate People's Court ruled in April 2026 that AI adoption is a voluntary business strategy rather than an objective circumstance justifying termination, in a case where a quality-assurance worker had refused a 40% pay cut after his quality-assurance work on large-language-model outputs was deemed automatable by the company (Pak, 2026; Swain, 2026). These three cases share a single mechanism: function-redefinition under competitive pressure, operating at the production layer of AI rather than at the deployment layer where existing governance work has focused.

This paper documents the labor-layer manifestation of the trajectory diagnosed at field scale by SI-WP-009 (Gantz, 2026c). It develops the function-redefinition mechanism specifically at the labor layer, distinguishing it from the function-redefinition operating at the deployment layer that prior corpus papers address. It engages the strongest alternative explanations and industry counteraccounts in Section 2.4, including productivity-transition, macro-null, AI-washing, competitive-necessity, and contractor-flexibility framings, and argues that these counteraccounts constrain rather than defeat the layer-specific claim. It then engages the comparative-feasibility question SI-WP-009 explicitly defers. It assesses each of the four candidate routes SI-WP-009 names — compute and cloud gatekeepers, procurement coalitions, treaty blocs, insurance and liability architecture — alongside licensing-as-architecture as a fifth. Licensing-as-architecture has the highest near-term feasibility because it is authority-adjacent to the function-definition layer, not merely leverage-adjacent to the actors in the field.

Licensing-as-architecture is not presented as a general solution to all labor-layer extraction. Its near-term feasibility is highest for the regulated-profession subset of the pattern, where existing professional authority can be extended to bind the function-definition layer. The unregulated supply-chain labor layer remains exposed absent additional architecture. The paper acknowledges the asymmetry between the well-supported documentation of the labor-layer pattern and the structurally-specified-but-architecturally-untested prescription for binding it. The paper does not predict that licensing-as-architecture will be deployed. It specifies what would have to be true for it to be.

Keywords: labor-layer AI extraction, function-redefinition, licensing-as-architecture, AI governance, ghost workers, automation displacement, comparative feasibility, regulated professions

Suggested citation: Gantz, T. W. (2026, May). The Function Got Redefined Underneath the Work: Labor-Layer Extraction in AI Development and the Architecture That Could Bind It. Synthience Institute. SI-WP-010. https://doi.org/10.5281/zenodo.20151153

1. The Cases

1.1 The Engineer Who Recorded Her Workflow

A technical writer and instructor at Oracle, identified in TIME's reporting by the pseudonym Jill, worked at the company for three decades before her layoff in 2026 (Chow, 2026). Before the layoff, she had been mandated to use Oracle's internal AI tools as part of her work; the tools, she reported, "create slop" that frustrated her team daily because using it "wasn't saving any time, and was only eating up productivity" (Chow, 2026).

Cynthia Sloan, a former senior director of technical writing at the same company, told TIME after her own layoff: "This is a job that I was so dedicated to for 19 years and gave everything to, and none of it matters" (Chow, 2026).

Oracle's market capitalization exceeded $400 billion at the time of the layoffs. The company had reported its best growth quarter in fifteen years. The layoffs are part of a planned reduction Reuters reported as affecting "thousands" of employees, with TIME citing reports of up to 30,000 positions and TD Cowen analysts estimating that a reduction of 20,000 to 30,000 positions could free $8 billion to $10 billion in incremental free cash flow for data-center projects (Reuters, 2026; Chow, 2026). The cuts are driven by a cash crunch from massive spending on AI data-center infrastructure. Oracle has been raising debt to fund the buildout. The data centers funded by the same restructuring are infrastructure for AI systems that may further reduce the need for the very categories of labor being eliminated to pay for them.

A laid-off senior manager of software development at Oracle, who asked to remain anonymous, described another dimension of the same pattern: "They're spending so much time trying to reverse-engineer the code that AI generated" (Chow, 2026). Junior engineers using AI to write faulty code, senior engineers spending their time fixing it, then the senior engineers being laid off as the cost of their judgment becomes visible only after it has been removed.

The layoffs at Oracle are not anomalous. Reuters reported in March 2026 that Oracle was planning thousands of job cuts amid a cash crunch from a massive AI data-center expansion, with fiscal 2026 capex expected to be $15 billion higher than a prior $35 billion estimate (Reuters, 2026). Meta moved similarly. Reuters reported in April 2026 that Meta planned a first wave of May 20 layoffs cutting about 10% of its workforce, close to 8,000 employees, with further layoffs planned later in 2026, linked to an AI-driven efficiency push as Zuckerberg invested heavily in AI and reshaped internal operations around the technology (Paul and Horwitz, 2026). Meta's 2026 capital expenditure forecast was raised to $125 billion to $145 billion, and the company was tracking employee interactions to train AI models as part of a broader initiative to build AI agents capable of autonomous work tasks (Paul and Singh, 2026).

What makes the Oracle case structurally significant is not its scale but its visibility into the mechanism. Engineers were mandated to use AI tools whose output was inadequate. Their judgment about the inadequacy was captured by their continued labor reverse-engineering the AI's failures. Their workflows were recorded. Then they were laid off, and the recordings, plus the AI tools, plus the building infrastructure to host more AI tools, became, in the structure the layoffs imply, what Oracle had instead of them.

1.2 The Workers Training the Replacement

The TechEquity Collaborative, in partnership with Alphabet Workers Union-CWA Local 9009 and the Communications Workers of America, surveyed 160 U.S.-based data workers and conducted 15 in-depth interviews. The resulting report, Ghost Workers in the AI Machine, was published on September 30, 2025 (Communications Workers of America and TechEquity Collaborative, 2025).

The findings: median hourly wage of $15, median paid workweek of 29 hours, equivalent to annual earnings of $22,620. 86% of workers worry about meeting their financial responsibilities (63% often, 26% sometimes). 25% rely on public assistance, primarily food assistance and Medicaid. 22% report having experienced homelessness because they could not afford to move or find housing they could afford. 66% spend at least three hours weekly waiting for tasks, with 26% spending more than eight hours waiting; only 30% are paid for time when no tasks are available. 52% of surveyed workers believe they are training AI to replace other workers' jobs; 36% believe they are training AI to replace their own jobs. 26% reported unpaid training time, 53% reported unpaid testing, and 31% had experienced suspension (Communications Workers of America and TechEquity Collaborative, 2025).
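The report's headline annual-earnings figure follows directly from its median wage and median paid workweek. A minimal arithmetic check (the 52-week year is an assumption of this sketch, not a figure stated in the report):

```python
# Reproduce the Ghost Workers report's annual-earnings figure from its
# median hourly wage and median paid workweek.
median_hourly_wage = 15        # dollars per hour (reported median)
median_paid_week_hours = 29    # hours per week (reported median)
weeks_per_year = 52            # assumed full-year schedule, not from the report

annual_earnings = median_hourly_wage * median_paid_week_hours * weeks_per_year
print(annual_earnings)  # 22620
```

The calculation matching the reported $22,620 exactly suggests the report derived its annual figure the same way, from medians rather than from per-worker annual totals.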

Christy Hoffman, General Secretary of UNI Global Union, characterized the pattern in attributable language at the report's release: "The inconvenient truth behind the AI revolution is that it's funneling immense wealth and power to the top on the backs of a shadow workforce doing exhausting, skilled work for poverty wages" (UNI Global Union, October 2025).

The work these data workers perform is not optional infrastructure. Major AI systems generally rely on human labor at multiple points: image annotation for vision systems, response evaluation for instruction-following models, content moderation for safety filtering, output verification for quality assurance. The labor producing the AI systems is what the pay rates and working conditions described above are buying.

The structural significance of the Ghost Workers documentation is not that data work is hard or poorly paid in absolute terms. It is that the labor producing the AI systems is structurally positioned to produce the systems that reduce the demand for that labor, and that the institutional architecture under which the production occurs treats the resulting displacement of the producing labor as the intended output of the production rather than as a cost requiring justification. The function-redefinition operates at this layer too: the labor relation is structured so that workers produce training and evaluation value while bearing the replacement risk, without sharing proportionately in the value created. The redefinition is invisible to the worker in its second phase: a worker who continues to be paid the same per-task rate while the AI is being trained to make the worker's task obsolete experiences only the first phase as such; the second phase appears, when it appears, as the abrupt end of available work.

In late 2025, Business Insider reported on Scale AI's contractor platform Outlier, where workers described their experiences working as AI trainers. A Scale AI spokesperson responded that Outlier gives contractors pay rates and estimated task times at the start of a project and reviews tasks when contributors flag that tasks regularly take longer than expected (Goel and Rollet, 2025). Additional claims about pay-rate reductions and task-cancellation patterns on the Outlier platform remain unverified. The pay-rate and onboarding patterns coincide with technical literature describing methods for training models to evaluate model outputs without human-annotated preference data, reducing reliance on human-provided evaluation signals in that workflow (Wang et al., 2024).

1.3 The Court That Named the Choice

On April 28, 2026, the Hangzhou Intermediate People's Court of Zhejiang Province in China published its ruling in a case involving an employee identified by the court only as Zhou and a technology company the court did not name. Zhou had been a quality-assurance supervisor at the company since November 2022, working on outputs of large language models, verifying response accuracy, and filtering content. His monthly salary was 25,000 yuan (approximately $3,500 at then-prevailing exchange rates). When his role was identified by the company as automatable through AI, the company offered him reassignment to a position with monthly salary of 15,000 yuan, a 40% reduction. When Zhou refused, the company terminated him, citing "organizational restructuring" and "reduced staffing requirements due to technological upgrades" (Pak, 2026; Swain, 2026).
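The reported figures in the Hangzhou case are internally consistent; a quick check of the pay cut and the approximate dollar conversion (the 7.15 yuan-per-dollar rate below is an illustrative assumption standing in for "then-prevailing exchange rates", not a figure from the reporting):

```python
# Check the pay figures reported in the Hangzhou case.
original_salary_yuan = 25_000   # Zhou's monthly salary (reported)
offered_salary_yuan = 15_000    # reassignment offer (reported)

# The reassignment offer as a fraction of salary removed.
cut = (original_salary_yuan - offered_salary_yuan) / original_salary_yuan
print(f"{cut:.0%}")  # 40%

# Illustrative exchange rate only (assumed, not from the reporting).
yuan_per_dollar = 7.15
print(round(original_salary_yuan / yuan_per_dollar))  # ~3497, i.e. roughly $3,500
```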

Zhou took the case through arbitration and then through the court system. He won at every stage. Swain's reporting of the Hangzhou Intermediate People's Court ruling summarizes the court's rejection of the company's invocation of AI adoption as legal grounds for termination: the termination grounds the company cited did not fall under negative circumstances such as business downsizing or operational difficulties, nor did they meet the legal condition of making it impossible to continue the employment contract (Swain, 2026).

The reasoning rests on Article 40 of China's Labor Contract Law, which permits termination only under defined "objective circumstances" making continued employment impossible. The court ruled that AI adoption is a voluntary business strategy chosen by the company for its own competitive advantage and does not qualify as an external, unforeseeable objective circumstance (Pak, 2026; Swain, 2026).

The Hangzhou court is not the only Chinese adjudicatory body to have reached this conclusion. A separate arbitration case from December 2024, published as a typical case in December 2025 by the Beijing Municipal Human Resources and Social Security Bureau, involved a mapping-data worker (identified by the surname Liu, employed since July 2009) who was terminated under "objective circumstances" after his employer shifted to AI-led automated map data collection and abolished the navigation-products department. The arbitration body ruled the termination unlawful and supported Liu's claim for compensation (Beijing Municipal Human Resources and Social Security Bureau, 2025; Swezey and Bloomberg, 2026). Public reporting describes at least two Chinese cases in which adjudicatory bodies drew a legal distinction between an external shock that makes a job impossible and an internal business decision that makes a job redundant. The first is grounds for legal termination. The second is not.

The structural significance of the Hangzhou ruling is not that it is binding outside China. It is not. The significance is that the legal reasoning has named what the documented record across the cases above shows: AI adoption as cause of layoffs is a chosen business strategy, not a technological inevitability.

This is the framing that the labor-layer documentation has needed and that the U.S. and EU legal frameworks have not provided. The Chinese courts did not invent the distinction. They named what the structural mechanism had been doing all along.

1.4 What the Three Cases Share

The three cases — Oracle, the Ghost Workers, the Hangzhou ruling — describe a single mechanism operating at three different surfaces of the same system.

Oracle: at the deployment-pipeline layer, engineers whose judgment was the bottleneck enabling the AI system to function are removed from the pipeline and replaced by the recording of their judgment plus the AI tools the recording trains. The function the engineer was performing, "produce work product the AI cannot produce alone," is redefined to "make the AI capable of producing this work without you," and the redefinition completes when the engineer is no longer needed.

Ghost Workers: at the training-data layer, workers are compensated at rates inconsistent with the value produced by their labor, which is the foundation of the AI systems being deployed. The labor relation is structured so that workers produce training and evaluation data under conditions where the future replacement risk is borne by the worker rather than priced into the value of the work.

Hangzhou: at the post-deployment layer, when the AI system has been trained well enough to perform the worker's role, the worker is offered terms that effectively force exit (40% pay cut) or terminated outright. The function the worker was performing, "verify and improve AI outputs," is redefined to "your job no longer exists because the AI is good enough," and the redefinition is presented as objective circumstance rather than as a chosen business strategy.

The three cases differ in surface but share the structural mechanism. Function-redefinition under competitive pressure is the mechanism this paper names.

Other framings are available and partially descriptive. Principal-agent failure between firm leadership and workers under competitive pressure captures part of the dynamic but locates the failure in monitoring and incentive alignment within the firm; function-redefinition operates on what the institution is authorized to claim about the work itself, not on the principal-agent relation. Labor power asymmetry under technological transition describes a more general pattern documented since the industrial revolution but does not specifically engage the AI-development-specific mechanism by which the worker's own labor produces the system that eliminates the worker's necessity. Cost shifting under competitive pressure captures the value-flow dimension but does not engage the function-definition layer at which the redefinition operates.

Function-redefinition is the specific mechanism this paper names because it is the mechanism that links the labor-layer pattern to the deployment-layer pattern SI-WP-009 names. The continuity of mechanism across labor and deployment layers is the structural bridge that lets the design-requirement analysis from SI-WP-009 apply to the labor-layer case. The mechanism is not the only framing applicable to the three cases. It is the framing that makes the cross-layer structural argument tractable.

The competitive pressure is the optimization gradient producing the trajectory documented at field scale in SI-WP-009 (Gantz, 2026c). The function-redefinition is the specific way that pressure operates at the labor layer: not by displacing the worker through external technological shock, but by progressively redefining the function the worker is performing until the worker becomes the input, then the redundancy, then the cost being eliminated.

The three cases are not presented as a representative sample of aggregate labor-market trends. They are existence proofs: they demonstrate that the function-redefinition mechanism is already operating at structurally significant nodes of the AI development pipeline. The structural argument this paper develops does not depend on the frequency of such cases, only on the mechanism's presence and the conditions that enable it.

The next section develops this mechanism formally and distinguishes it from the function-redefinition operating at the deployment layer that prior corpus papers address.

2. The Function-Redefinition Mechanism at the Labor Layer

2.1 What Function-Redefinition Is

Function-redefinition is the mechanism by which interventions designed to constrain a system get routed around through redefinition of what the system is doing. SI-WP-009 develops the mechanism at field scale: interventions binding labor conditions, wage rates, or substitution-against-individual-workers will be routed around because the actors operating inside the optimization gradient can redefine the function the institution is performing in ways that satisfy the binding while continuing to produce the targeted outcome (Gantz, 2026c, Section 6.1.1).

At the labor layer, function-redefinition operates with the same structure but a different set of surfaces. The function being redefined is what the worker is producing for the institution. The redefinition is performed by the institution, not by the worker, and the worker typically has no leverage to refuse the redefinition because the institution controls the labor relation.

The mechanism is particularly powerful at the labor layer because the worker is contributing the very thing that the redefinition uses to eliminate them. The engineer recording her workflow is producing the artifact that makes her replaceable. The data worker training the AI is producing the system that will eventually not need her. The QA supervisor verifying AI outputs is producing the quality signal the AI uses to improve until verification is no longer required. The mechanism uses the worker's own labor as the input that produces the worker's exit.

This is what distinguishes labor-layer function-redefinition from deployment-layer function-redefinition. At the deployment layer, the redefinition operates on what an institution is allowed to claim about the work it produces. At the labor layer, the redefinition operates on what the worker's labor is constituting. The labor is constituting both the work product the institution sells and the system that will eliminate the labor's necessity. The worker is not just being replaced by AI. The worker's labor is what the AI is being built from.

2.2 The Three Layers Where the Mechanism Operates

Function-redefinition at the labor layer operates at three nested layers, each with distinct dynamics.

Supply layer (data work): The worker's labor produces training data, evaluation signals, content moderation judgments, and other foundational inputs to the AI systems. The function-redefinition at this layer is the gradual replacement of human labor with synthetic data and AI-generated training signals. Wang et al. (2024, Self-Taught Evaluators) demonstrate methods for training models to evaluate model outputs without human-annotated preference data, reducing reliance on human-provided evaluation signals in that workflow. Synthetic data distillation methods are emerging for scalable clinical information extraction, reducing reliance on manually labeled data in some information-extraction workflows (Woo et al., 2025). The labor functions that train these systems are the same functions the systems are being designed to partially substitute.

Deployment layer (workflow integration): The worker's labor inside an institution that has integrated AI is increasingly mediated by AI systems. The function-redefinition at this layer is the gradual transfer of judgment, decision-making, and execution from the worker to the AI. The Oracle pattern documented in Section 1 is the deployment-layer manifestation: engineers mandated to use AI tools produce both their work product and the workflow recordings that make the work reproducible without them.

Hiring layer (compounding effect): The worker's access to new employment is increasingly mediated by AI systems that screen, evaluate, and select candidates, and by employer preference patterns in AI-related roles. The function-redefinition at this layer operates on the labor market itself: filtering and preference patterns may systematically disadvantage older workers, workers without specific profile signals, and workers whose backgrounds do not fit the patterns the AI was trained on. The empirical evidence at this layer is more limited than at the supply and deployment layers and bears careful framing. Generation's Age-Proofing AI survey of 2,610 employees over 45 and 1,488 employers across France, Ireland, Spain, the U.K., and the U.S. found employer preference gaps in AI-related roles: U.S. hiring managers reported being 90% likely to consider candidates under 35 versus 32% likely to consider candidates over 60 for roles that regularly use AI tools, with European figures of 86% versus 33% (Generation, 2024). This is employer preference data in AI-related roles, not direct evidence of algorithmic screening bias as such, though the two patterns reinforce each other in the labor market. This hiring-layer analysis is more inferential than the supply- and deployment-layer analyses; it rests on reported employer preferences and the structural logic of function-redefinition rather than on direct documentation of changes in algorithmic screening. The mechanism at this layer is plausible — and, if present, compounds the other two layers — but it is currently less empirically supported.

The supply and deployment layers form the core mechanism; the hiring layer operates as a plausible compounding surface rather than a primary driver. Where present, it reinforces the other two: a worker eliminated from the deployment layer at age 50 enters a hiring market where AI-related roles show substantial age-based preference gaps. Their labor at the supply layer (if they have access to it at all) is compensated at rates structurally inconsistent with the cost of living. The function-redefinition at each layer reinforces the function-redefinition at the others.

2.3 Why This Mechanism Is Not Self-Limiting

The "optimization gradient" is the prevailing set of competitive pressures and incentive structures that reward firms for cost reduction, labor substitution, and capital investment in AI infrastructure under the current market architecture, irrespective of individual actor intent. It is the field-scale dynamic SI-WP-009 names; the labor-layer pattern documented here is a manifestation of it.

A reasonable counterargument is that the mechanism described here cannot continue indefinitely because at some point there is no labor pool left to extract from, no consumers left to buy products, no economic ecosystem for the AI systems to operate within. This is the macroeconomic skepticism position represented in the empirical work of the Yale Budget Lab (The Budget Lab at Yale, 2026; Gimbel, Kendall, and Nunn, 2026), which argues that aggregate labor market effects of AI to date have been smaller than the AI displacement narrative would suggest.

The Yale Budget Lab analysis is correct that aggregate labor market effects to date have not been catastrophic at the macro level. The empirical evidence on whether and when macro-level effects will emerge is currently mixed. The structural argument this paper makes does not require catastrophic macro effects to be imminent. The structural argument is that the function-redefinition mechanism is operating at specific labor layers with measurable severity, documented through primary sources in Section 1, and that the deployment-feasibility analysis in Sections 4 and 5 applies to the labor-layer pattern regardless of whether macro effects emerge on the timeline some forecasters expect. The Yale Budget Lab counter is a macro-level qualifier on the macro-level claim; this paper makes a labor-layer-pattern claim that operates below the macro level.

Acemoglu's NBER work establishes the relevant framing: how AI is being used and developed — its trajectory toward replacing humans rather than extending what they can do — reflects choices about how AI is deployed, not technological inevitability (Acemoglu, 2021, Harms of AI). The trajectory is not technologically determined. It is choice-determined. Different incentive architectures would produce different trajectories. SI-WP-009 develops this structural argument: the current trajectory is what the present incentive architecture rewards, and intervention requires architecture that makes preservation of substantive human contribution the path of least resistance under the same competitive pressure that otherwise drives removal (Gantz, 2026c, Section 6.1). The current trajectory is the one being chosen, not the one being imposed.

Sam Altman, CEO of OpenAI, conceded the framing in a different register in February 2026: there is "some AI washing" where companies are blaming AI for layoffs they would otherwise do, while also acknowledging that some job displacement by AI is real and that he expects more of it over time (Scammell, 2026). The CEO of the company most prominently identified with AI development conceded that AI is being used as cover for layoff decisions that have other drivers. This is not the same as conceding that AI is causing the layoffs. It is conceding that the layoffs are choices being made by the actors who are also choosing the AI development trajectory, with AI providing the rationalization rather than the cause.

The mechanism is not self-limiting because the actors operating inside it are not optimizing for ecosystem stability. They are optimizing for competitive position inside the gradient. The gradient does not stop until external architecture binds it. SI-WP-009 develops this structural argument at length and arrives at the design requirement specified in its Section 6.3 (Gantz, 2026c). The labor-layer paper does not need to re-derive that argument; it documents the mechanism's operation at the labor layer specifically.

2.4 Alternative Explanations and Industry Counteraccounts

The labor-layer mechanism named above does not require the broader claim that AI has already produced catastrophic aggregate labor-market collapse. Several counteraccounts deserve direct engagement.

First, the productivity-transition account argues that AI is primarily augmenting work rather than replacing workers, while creating some jobs, evolving others, and eliminating some roles as part of a manageable labor-market transition. OpenAI's October 2025 Workforce Blueprint articulates this account explicitly: it frames the labor-market effect of AI as lying between job-loss panic and productivity-boom optimism, states that some jobs will disappear while others evolve or are created, and characterizes the present pattern as more enabler than replacer (OpenAI, 2025). This account is important because it concedes disruption while rejecting a simple displacement narrative.

Second, the macro-null account argues that aggregate labor-market data do not yet show measurable AI-driven employment collapse. The Yale Budget Lab's April 2026 tracker found no current relationship between measures of AI exposure, automation, or augmentation and changes in employment or unemployment, while emphasizing that better data are needed and that the analysis is not predictive of future effects (The Budget Lab at Yale, 2026). A later Budget Lab analysis likewise found no statistically or economically significant labor-market effects yet, while acknowledging that exposure metrics can hide offsetting effects and may fail to distinguish replacement from augmentation (Gimbel, Kendall, and Nunn, 2026).

Third, the mixed-causality or AI-washing account argues that some layoffs attributed to AI are layoffs that would have happened anyway, with AI serving as a narrative or investor-facing justification rather than a direct cause. Sam Altman made this point explicitly in February 2026, while also acknowledging that some job displacement by AI is real and likely to become more visible over time (Scammell, 2026). This account matters because it prevents the paper from treating every AI-invoking layoff as proof of AI-caused displacement.

Fourth, the competitive-necessity account argues that firms facing AI-driven market transformation must redirect spending toward AI infrastructure, reduce headcount, and reorganize labor in order to remain viable. This account does not need to be dismissed as bad faith; Reuters reporting on Oracle and Meta shows major firms linking workforce reductions, AI infrastructure investment, and internal restructuring in ways consistent with competitive-pressure claims (Reuters, 2026; Paul and Horwitz, 2026; Paul and Singh, 2026). But the competitive-necessity argument carries binding force only if the constraint structure it invokes is treated as exogenous rather than as a product of the field-level architecture all firms are operating within. The competitive-necessity argument collapses once the architecture itself is treated as the variable rather than as fixed background — a point the Hangzhou court reached at law (Pak, 2026; Swain, 2026) and Acemoglu develops at theory (Acemoglu, 2021). Change the conditions under which all competitors operate, and the constraint shifts.

Fifth, the contractor-flexibility account argues that AI data work platforms provide flexible access to paid work, visible task rates, and opportunities for workers who might otherwise lack comparable income streams. Platform-side responses belong in the evidentiary record. Scale AI, responding to Business Insider reporting on its Outlier platform, stated that Outlier gives contractors pay rates and estimated task times at the start of projects and reviews tasks when contributors flag that tasks regularly take longer than expected (Goel and Rollet, 2025).

These counteraccounts constrain the paper's claim. They do not defeat it. The claim made here is layer-specific rather than macro-totalizing. AI may augment many workers; aggregate labor-market data may not yet show mass displacement; some companies may invoke AI as cover for layoffs with other causes; competitive pressure may be real; and some contractors may value platform flexibility. None of those points disproves the narrower structural claim documented in this paper: in specific labor-layer contexts, workers are producing the data, evaluations, workflow records, and verification signals that reduce their own bargaining position or eliminate their role, while the institutional architecture treats the resulting displacement as efficiency, restructuring, or objective necessity. The structural claim is compatible with the more measured industry framings precisely because it operates at a layer the macro-null account does not measure and the productivity-transition account does not foreground.

The mechanism is not self-limiting, the counteraccounts do not defeat it, and the gradient does not stop until external architecture binds it. The next section addresses what would have to be true for binding architecture to be deployed.

3. The Design Requirement and What Would Need to Bind

3.1 The Design Requirement Recap

SI-WP-009 specifies the design requirement for any sufficient intervention against the trajectory: the intervention must bind at three levels in mutually reinforcing form (the individual, the organizational, the regulatory), and at each level it must satisfy the structural condition Kingsbury Barry and Montanez (2026) name in their work on governance design — that compliant systems can still fail the humans they are built to serve when the architecture does not make sustained compliance the path of least resistance for the operating actors (Kingsbury Barry and Montanez, 2026, particularly Section 5.3 of Governance Without Consent; Gantz, 2026c, Section 6.3).

The three levels can be summarized briefly. The individual level concerns the behavior of the practitioner or worker performing the regulated function: whether their judgment and substantive engagement are present in the work product or have been displaced by AI-generated output passed through ceremonial human sign-off. The organizational level concerns the institution employing the practitioner: whether its workflows, incentive structures, and operational architecture preserve or erode the conditions under which substantive individual-level engagement can occur. The regulatory level concerns the external authority licensing the institution and the practitioner: whether the regulatory framework's conditions for continued authorization preserve or erode the institutional conditions required at the organizational level.

SI-WP-009's design requirement is that binding at any one level without the other two is routed around through the unbound levels. Bind only the individual practitioner without binding the institution, and the institution restructures the practitioner's work. Bind only the institution without binding the regulator, and the institution forum-shops or weakens regulatory cooperation. Bind only the regulator without binding individual and institutional levels, and the regulator's authority becomes nominal because the operational substrate it claims to govern has restructured around it. What is required, articulated within Institute vocabulary, is that any binding architecture must be self-sustaining under the operating conditions of the regulated actors rather than dependent on sustained discipline against immediate incentive cost — a structural condition for which Kingsbury Barry and Montanez's published work provides the originating formulation.

The condition can be stated plainly: if it is easier for a regulated actor to comply than to route around the constraint, the constraint binds; if routing around is cheaper than compliance, the constraint is a dead letter. The licensing bodies that might impose AI-related conditions already operate under finite capacity and political pressure; the architecture must be such that enforcing substantive practitioner engagement is — for them — less costly than letting it erode. That condition is not met today.
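The routing condition and the three-level requirement can be sketched as a toy cost comparison. This is an illustration of the paper's argument, not a model drawn from it; the level names come from Section 3.1, while the cost values and the `Level` structure are hypothetical:

```python
# Toy illustration (not from the paper) of the three-level binding
# condition: an architecture binds only if every level is reached AND,
# at each level, compliance is cheaper than routing around the constraint.

from dataclasses import dataclass

@dataclass
class Level:
    name: str
    bound: bool          # does the architecture reach this level at all?
    comply_cost: float   # cost of sustained compliance at this level
    route_cost: float    # cost of restructuring around the constraint

def constraint_binds(levels):
    """A level holds if it is bound and compliance is the cheaper path.
    The architecture binds only if all levels hold; any unbound level
    becomes the routing channel (Section 3.1's argument)."""
    return all(lv.bound and lv.comply_cost < lv.route_cost for lv in levels)

# Bind only the individual practitioner: the institution restructures.
partial = [
    Level("individual",     True,  1.0, 5.0),
    Level("organizational", False, 0.0, 0.0),  # unbound: routing channel
    Level("regulatory",     False, 0.0, 0.0),  # unbound: routing channel
]
# Bind all three levels, with compliance cheaper than routing at each.
full = [
    Level("individual",     True, 1.0, 5.0),
    Level("organizational", True, 2.0, 8.0),
    Level("regulatory",     True, 1.5, 6.0),
]

print(constraint_binds(partial))  # False: routed around through unbound levels
print(constraint_binds(full))     # True: compliance is the cheaper path throughout
```

The sketch makes the dead-letter case mechanical: a constraint that is bound at one level but routable at another evaluates to non-binding regardless of how strict the bound level is.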

At the labor layer specifically, the design requirement entails that the intervention must operate not only on the labor relation between the individual worker and the individual employer (which is where most labor regulation has historically operated), but also on the institutional architecture within which the labor relation is structured, and on the regulatory framework within which institutional architecture itself is bound. An intervention that binds the worker-employer relation but not the institutional architecture will be routed around through institutional restructuring. An intervention that binds the institutional architecture but not the regulatory framework will be routed around through forum shopping and jurisdictional arbitrage.

3.2 What the Function-Definition Layer Is

Building on SI-WP-009's structural framing, the relevant binding layer is the function-definition: what the institution is authorized to claim about the work it produces, and what must constitutively be present in the work for the institution to retain that authorization (Gantz, 2026c, Section 6.1.1).

At the deployment layer, this is straightforward: the medical institution is authorized to claim it produces medical care, which requires that licensed physicians substantively engage in the diagnostic process. An institution that has redefined "medical care" to mean "AI-generated recommendations signed off by a physician who has been progressively excluded from substantive engagement" is no longer producing medical care; it is producing something else, and the licensing framework that authorized it to claim medical care no longer applies.

At the labor layer, the function-definition does not operate through a general authorization to employ workers. Most firms do not need professional licensure merely to employ labor. The relevant authorization is narrower: the institution's authorization to perform, market, bill for, certify, or hold itself out as producing regulated professional work.

Licensing-as-architecture becomes available where the labor relation is constitutive of that regulated function. A hospital is not merely an employer; it is an institution authorized to provide medical care. An audit firm is not merely an employer; it is authorized to produce audit work under conditions of independence, competence, and professional accountability. A law firm is not merely an employer; it operates through licensed practitioners whose professional obligations constrain what the institution may present as legal work. An engineering firm performing regulated engineering work is not merely an employer; it operates through responsible professional judgment tied to public-safety authorization.

The binding point is therefore not employment as such. The binding point is the institution's authorization to claim that AI-mediated work remains regulated professional work when the labor, oversight, and practitioner-engagement conditions that make that claim true have been altered. A licensing framework that says "you are authorized to provide regulated professional work only if the labor relations constitutive of that work preserve substantive licensed engagement, accountable supervision, and auditable human responsibility" binds at the function-definition layer. An institution that redefines its labor practices so that those conditions are no longer present has redefined itself out of the licensed function, even if it continues to employ people in some ordinary labor-law sense.

This is the structural distinction between wage-rate regulation, which binds the employment relation as such, and licensing-as-architecture, which binds the function the institution is authorized to claim. Wage-rate regulation operates on what the institution pays. Licensing-as-architecture operates on what the institution is allowed to call its work product. The two operate at different layers of the same system, and the redefinition pressure that routes around the first does not route around the second so long as the function-definition is maintained.

3.3 Why Existing Protections Have Failed

Wage-rate regulation has been the dominant U.S. labor protection mechanism since the New Deal. It was effective in the post-war period largely because the institutional architecture in which workers operated was relatively stable. The function-redefinition mechanism documented in Section 2 is what makes wage-rate regulation now insufficient.

The data work supply chain operates through a layered structure: AI companies contract with vendors (Scale AI, Toloka, Appen, Outlier), who contract with sub-contractors, who contract with individual workers as gig contractors, often through platform intermediaries. The wage rate at any single layer may be formally defensible or difficult to challenge under the applicable classification and jurisdictional framework. The aggregate effect is that the labor producing the AI systems is compensated at rates structurally inconsistent with the value produced, even where no single layer presents an actionable wage violation (Communications Workers of America and TechEquity Collaborative, 2025; Goel and Rollet, 2025).

Anti-discrimination law has been the dominant U.S. labor protection mechanism for older workers since the Age Discrimination in Employment Act of 1967. The function-redefinition mechanism at the hiring layer documented in Section 2 is what makes ADEA-based protection now insufficient. Employer preference patterns and AI-mediated filtering may produce disparate impact on older workers in AI-related roles, but the patterns are difficult to challenge through individual disparate-treatment claims because the filtering is mediated through hundreds of features, none of which any single applicant can prove was used to exclude them.

As originally enacted, the European Union's AI Act placed major high-risk obligations, including employment-related Annex III systems, on the 2 August 2026 application timeline. At publication time, however, that timeline is unstable: a 7 May 2026 provisional agreement between Council and Parliament negotiators would delay application to 2 December 2027 for stand-alone high-risk AI systems and 2 August 2028 for high-risk AI systems embedded in products, pending formal endorsement and adoption (Council of the European Union, 2026; Chee, 2026). The instability strengthens rather than weakens this paper's treaty-bloc assessment: even the strongest existing treaty-bloc framework remains procedurally vulnerable to delay and does not presently bind AI-driven layoffs at the function-definition layer. The AI Act, in any of its possible timelines, binds at the deployment layer (how AI is used) but not at the function-definition layer (whether AI deployment is authorized in the first place under labor conditions consistent with binding institutional architecture).

The Hangzhou ruling, by contrast, operates closer to the function-definition layer than the AI Act does in the specific context of termination. It treats AI adoption not as an objective condition that automatically changes the legal status of the employment relation, but as a business strategy whose labor-law consequences remain subject to justification. By ruling that AI adoption is a voluntary business strategy that does not qualify as an objective circumstance under Article 40, the court has bound the legal authorization of AI-driven layoffs to a question of business strategy rather than to a question of operational fact. The institution that wishes to lay off workers because of AI must first establish that the layoff meets the legal conditions for termination, and the court has now ruled that "we adopted AI" is not by itself a sufficient ground.

The binding layer is the function-definition layer. The next section assesses which routes can reach it.

4. Comparative Feasibility of Available Routes

SI-WP-009 Section 6.3.2 names four candidate routes for landscape-altering action against the trajectory: compute and cloud gatekeepers, procurement coalitions, treaty blocs, and insurance and liability architecture (Gantz, 2026c). The paper does not rank or develop these routes; it leaves the feasibility question explicitly open. This section engages the feasibility question for the labor-layer case. It addresses each of the four named routes in turn, adds licensing-as-architecture as a fifth candidate, and argues that licensing-as-architecture has the highest near-term feasibility on the relevant timeline because it is authority-adjacent to the function-definition layer rather than merely leverage-adjacent to the actors operating in the field.

The four-route taxonomy assessed here inherits from SI-WP-009's Section 6.3.2 framing. Other routes exist that this paper does not assess, including tort-liability expansion through class action, shareholder action against publicly traded firms with AI-driven layoffs, bankruptcy and restructuring-law constraints, and fiduciary-duty claims under ERISA for AI-related restructurings. The decision to limit this paper's comparative analysis to the SI-WP-009 four-route taxonomy plus licensing-as-architecture is a scope decision; it is not a claim that the assessed routes exhaust the available space.

The relevant distinction across the routes is not infrastructure versus no infrastructure. Compute providers, procurement systems, treaty blocs, and insurance markets all possess existing infrastructure of some kind. The distinction is whether the infrastructure is authority-adjacent to the function-definition layer or merely leverage-adjacent to the actors operating in the field. Authority-adjacent infrastructure can be extended to bind the function-definition because the underlying authority already covers the function being claimed. Leverage-adjacent infrastructure must construct new authority over the function before it can bind, even where the infrastructure providing the leverage already exists.

4.1 Compute and Cloud Gatekeepers

The compute supply chain is concentrated. A small number of compute and cloud providers control the substantial majority of AI training compute. A coordinated decision by these actors to condition compute access on labor practices of the institutions using the compute would produce immediate binding pressure on the entire AI deployment ecosystem.

Compute and cloud gatekeepers possess substantial technical leverage. They already operate compliance, trust-and-safety, acceptable-use, know-your-customer, procurement, and audit-like processes. The infrastructure is not absent.

The structural difficulty is that the leverage is authority-misaligned with the function-definition layer. Compute providers do not possess legitimate labor-regulatory authority over the professional functions their customers are claiming to perform. They are also themselves institutions operating inside the optimization gradient that produces the trajectory documented in this paper. They are profiting from the same dynamics. They are subject to competitive pressure: if a major provider conditioned compute access on labor practices, customers would route to alternatives — domestic Chinese chips, alternative architectures, smaller providers that decline to impose conditions. The competitive pressure makes voluntary deployment of binding architecture by compute and cloud gatekeepers structurally improbable.

Net assessment: Substantial leverage, but authority-misaligned with the function-definition layer; active counterparty optimizing against the binding; structurally weak.

4.2 Procurement Coalitions

Procurement-coalition interventions would condition government and large enterprise purchasing of AI systems on the labor practices of the producing institutions. The structural attraction is real: if a coalition of major procurement actors (national governments, multinational enterprises, regional consortia) conditioned their multi-billion-dollar AI procurement spend on demonstrable labor-condition standards, the affected institutions would feel the constraint immediately.

Procurement infrastructure is not absent. Public procurement standards exist. FedRAMP exists. ESG and supplier-labor standards exist. The infrastructure has been used to condition procurement on labor practices in other domains.

But this infrastructure is leverage-adjacent to the function being claimed and does not natively possess authority over it. Constructing a coalition of national procurement bodies operating to common AI labor-condition standards would require multi-year negotiation across governments with different labor frameworks, different political dynamics, and different relationships with the AI industry. The coordination problem is substantial. Common standards, certification mechanisms, cross-jurisdictional audit infrastructure, and enforcement processes would have to be built. The infrastructure exists in domain-adjacent forms but not in the form required.

The deeper structural problem is that procurement standards bind only the institutions selling to the coalition. The institutions producing AI systems for the consumer market, or producing them for export to jurisdictions outside the coalition, would not be directly bound. The pressure produced by procurement coalitions would be substantial but partial.

Net assessment: Substantial leverage, partial existing infrastructure, but authority-misaligned with the function-definition layer; coordination construction required on multi-year timeline; real but slow.

4.3 Treaty Blocs

Treaty-bloc interventions would condition trade access, mutual recognition, or other treaty-mediated benefits on labor-condition standards in AI development. The European Union's AI Act is the most concrete current instance of a treaty-bloc framework that touches on AI-deployment questions, though as Section 3.3 notes, the AI Act does not bind at the function-definition layer for layoffs and does not directly include labor-condition standards for AI-development labor.

The structural attraction of treaty blocs is regulatory leverage at scale. A treaty bloc with major-market access conditioning that access on labor-condition standards produces extraterritorial effects (the Brussels effect operating through the EU; analogous effects available through other major-market blocs). The structural difficulty is procedural vulnerability and authority misalignment. The recent EU AI Act timeline illustrates the vulnerability: as originally enacted, high-risk obligations would have taken effect on 2 August 2026; the 7 May 2026 provisional agreement would delay application to 2 December 2027 for stand-alone systems and 2 August 2028 for embedded systems, pending formal endorsement (Council of the European Union, 2026; Chee, 2026). Treaty amendments and major regulatory packages operate on legislative and diplomatic cycles measured in years, and the cycles themselves are procedurally vulnerable to delay.

The European Trade Union Confederation has called for stronger protections; legal scholars have proposed a European AI Social Compact that would combine employment support, training, and social protections to cushion displacement (Constantin, 2026). None of these proposals has been enacted. By the time they reach the legislative cycle and the treaty machinery, the labor-layer architecture now under construction will already be in place. The regulation will arrive at a steady state different from the one it was designed to regulate.

The further structural problem is that treaty blocs do not bind the U.S. or Chinese AI markets directly. Treaty regulation produces extraterritorial effects, but the effects are partial and lagged. Multilateral treaty negotiation incorporating major AI-producing jurisdictions would face the same coordination problems as procurement coalitions but at higher diplomatic friction.

Net assessment: Substantial leverage, partial existing infrastructure in jurisdictions with treaty machinery, but authority-misaligned with the function-definition layer at present, and procedurally vulnerable to delay; multi-year timeline under all conditions, with the timeline itself unstable. Treaty blocs occupy an intermediate position: they possess regulatory authority, and in principle could be extended to bind the function-definition layer for labor. The difficulty is that such an extension would require constructing that specific layer of authority within the bloc's machinery — negotiating what counts as "substantive professional engagement" in medicine or law across multiple national frameworks. Licensing bodies already hold precisely that layer. The distinction is one of distance to the target, not of regulatory power in the abstract.

4.4 Insurance and Liability Architecture

Insurance pricing is reactive in design. Insurers price risk based on actuarial data; they do not typically lead by pricing for risks that have not yet been actuarially established. By the time the labor-layer harm patterns documented in Section 1 are reflected in actuarial data, the harms will have already occurred at sufficient scale to produce the data.

Insurance infrastructure is not absent. EPLI, D&O, E&O, cyber, workers' compensation, and emerging AI insurance products already exist. The infrastructure has been used to price labor-related risks in other domains.

But this infrastructure is leverage-adjacent to the labor-layer harms, oriented toward indemnifying deployers rather than pricing the harms onto them. The institutions producing the AI systems are also the institutions purchasing insurance, and the insurance market for AI deployment is small enough that the major customers can substantially shape product offerings. One prominent emerging product, "AI errors and omissions" liability, is structured to indemnify deployers against the consequences of AI failures rather than to price the labor-layer harms onto them.

Liability architecture (tort law, statutory liability for specific AI-related harms) has more structural promise than insurance pricing because it operates through courts rather than through actuarial markets. The Hangzhou ruling itself is an example of liability architecture operating through courts, and the ruling does bind a function-definition-layer question (whether AI adoption qualifies as an objective circumstance under Article 40) for the specific employment relation before the court. The structural difficulty is that liability architecture is fragmented across jurisdictions, requires individual cases to be brought and won, and produces effects through precedent that propagate slowly. The distinction the comparative-feasibility argument is drawing is therefore not that liability cannot ever bind at the function-definition layer — Hangzhou shows that it can in the individual case — but that liability operates through case-by-case precedent propagation rather than through systematic architecture native to the layer being bound. The function-definition layer can be touched by liability in particular cases; it cannot be bound at field scope by liability operating without separate architectural support.

Net assessment: Substantial infrastructure but authority-misaligned with the function-definition layer at field scope; reactive timeline for insurance; precedent-propagation timeline for liability; vulnerable to capture; structurally weak for systematic labor-layer binding, though specific liability cases (like Hangzhou) can produce binding effects at the individual-case scope through which liability operates. Liability architecture is thus not authority-misaligned in the same way compute gatekeepers are; it is authority-proximate but precedent-propagation-bound. It can touch the function-definition layer case by case, as Hangzhou shows, but cannot bind it at field scope without a supporting architecture that licensing provides.

4.5 Licensing-as-Architecture

Licensing-as-architecture is different from the four routes assessed above because professional regulators already possess authority over the function that institutions are claiming to perform. The required expansion is not the construction of an entirely new field of authority from nothing. It is the use or extension of existing professional authority so that AI-mediated labor substitution cannot redefine the professional function while retaining the authorization attached to that function.

The relevant institutional infrastructure exists in distributed form across regulated professions in the United States. In medicine, state medical boards license individual physicians, while state health departments and CMS Conditions of Participation license hospital systems. The Joint Commission provides voluntary accreditation that is in practice required for most hospital reimbursement. In law, state bar associations license individual attorneys, and state supreme courts and disciplinary boards govern attorney conduct. The ABA Model Rules provide a framework that most states adopt with modifications, including rules on supervision and competence that arguably apply to AI-mediated legal work. In engineering, state professional engineering boards license individual engineers, and state contractor licensing boards license engineering firms in some jurisdictions. Federal frameworks (PE licensure reciprocity, federal contractor requirements) operate in parallel. In accounting, state boards of accountancy license individual CPAs, while the AICPA and PCAOB provide federal-level standards for audit firms. SEC rules on auditor independence operate at the firm level. The precise scope of each body's authority varies by jurisdiction. The pattern of authority-adjacency to the function-definition layer holds across all of them: in each domain, an existing licensing infrastructure already possesses authority over the function the institution is claiming to perform.

The table below illustrates how this distributed authority maps to AI-deployment binding per profession.

| Profession / domain | Existing authority base | Target of condition | Existing hook | Required expansion | Labor-layer relevance |
|---|---|---|---|---|---|
| Medicine | State medical boards, state health departments, CMS Conditions of Participation, accreditation bodies such as the Joint Commission | Physicians, hospitals, clinical workflows, institutional conditions of care | Licensure, credentialing, institutional certification, reimbursement-linked conditions | AI deployment conditions tied to substantive clinical engagement, accountable supervision, and auditable care processes | Prevents ceremonial sign-off and progressive exclusion of licensed practitioners from AI-mediated care |
| Law | State supreme courts, state bars, disciplinary boards, professional-conduct rules | Attorneys, supervised legal work, possibly firms through supervision and unauthorized-practice rules | Competence, supervision, confidentiality, unauthorized practice of law | Disclosure, supervision, and accountability standards for AI-mediated legal work; possible firm-level expansion where authority is limited | Preserves substantive licensed responsibility where AI systems draft, screen, evaluate, or advise within legal work |
| Accounting / audit | State boards of accountancy, PCAOB, SEC auditor-independence and audit-quality rules | CPAs, audit firms, audit workflows, assurance claims | Independence, audit quality, professional skepticism, firm standards | AI deployment and labor-substitution conditions tied to audit evidence, review, and accountable professional judgment | Prevents replacement of substantive audit judgment with AI-generated outputs presented as professionally assured work |
| Engineering | State professional engineering boards, responsible-charge doctrines, contractor / firm licensing where applicable | Licensed engineers, engineering firms, public-safety work products | Responsible charge, public-safety obligations, licensure of professional engineering work | Firm-level AI deployment standards where available; practitioner-level conditions where firm authority is limited | Preserves accountable engineering judgment in AI-mediated design, review, inspection, and certification |

The table is illustrative, not exhaustive. The infrastructure varies substantially across jurisdictions: medicine is closer to uniform across U.S. states because of CMS Conditions of Participation and Joint Commission accreditation; law varies more (integrated vs. independent bar, state supreme court vs. state bar disciplinary authority); engineering varies most (firm-level licensing exists in some jurisdictions and not others). The table's purpose is to show that licensing-as-architecture is not a single national mechanism but a family of authority-adjacent interventions whose operational form varies by profession and jurisdiction.

What none of these institutional structures currently does, with the partial exception of work-in-progress in medicine that SI-WP-008 diagnoses (Gantz, 2026b), is condition licensure on the institutional conditions under which AI deployment occurs in the licensed institutions. SI-WP-008 develops the ceremonial-governance failure mode in medicine and argues that consequence architecture must be designed differently in domains where deployment failure produces catastrophic personal harm. It diagnoses the failure that licensing-as-architecture would address. SI-WP-010 proposes licensing-as-architecture as the structural intervention that would address that failure, extended across regulated professions.

The deployment of licensing-as-architecture requires different mechanisms in different professions. In some cases, existing licensing authority can be extended to conditional licensure with AI deployment standards (state medical boards extending physician licensure conditions; SEC extending auditor independence rules to AI deployment in audit work). In other cases, statutory expansion of existing licensing bodies' authority is required (state engineering boards in jurisdictions where firm-level licensing authority is currently limited; state bar associations in jurisdictions where firm-level authority is currently limited). In still other cases, federal frameworks may need to be modified (CMS Conditions of Participation extended to include AI deployment conditions; PCAOB rules extended).

The pattern is authority-adjacent extension of existing institutional infrastructure rather than authority-misaligned leverage. The licensing bodies, the certification mechanisms, the audit infrastructure, the enforcement processes — these exist in distributed form across regulated professions, and crucially they possess authority over the function the institution is claiming to perform. What is required is the extension of their authority and operating practices to encompass AI deployment patterns. The extension is not trivial; in some jurisdictions and some professions it requires legislative action. But it operates against an existing base of authority that natively covers the function, rather than requiring construction of new authority over the function from scratch.

This is the structural difference between licensing-as-architecture and the four routes assessed in Sections 4.1 through 4.4. The four named routes possess leverage but not native authority over the professional function-definition layer. Licensing-as-architecture possesses authority adjacent to the function being claimed.

Licensing-as-architecture also has the structural advantage of being domain-specific in ways the four routes assessed in Sections 4.1 through 4.4 are not. Each profession's licensing infrastructure has authority and expertise specific to that profession's institutional architecture. Medical licensing infrastructure understands hospital systems. Legal licensing infrastructure understands law firms. Engineering licensing infrastructure understands engineering firms. The cross-profession structural argument is that each of these infrastructures can apply licensing-as-architecture within its own domain expertise, producing binding architecture per profession that the other routes cannot produce.

To make the proposal concrete, consider a state medical board: it could condition hospital licensure on proof that a specified fraction of AI-assisted diagnostic outputs undergo documented, contemporaneous physician review before being signed off — not a post-hoc audit trail but a logged attestation embedded in the clinical workflow. Random sampling and complaint-driven investigation would enforce the condition. A hospital that could not produce those attestations would risk its credentialing. The condition does not prescribe how AI is used; it prescribes what must constitutively be present for the work to remain licensed medical care. Comparable conditions, calibrated to each profession's substantive-engagement standard, can be specified for law (substantive lawyer judgment behind AI-drafted briefs), engineering (responsible-charge engineer engagement with AI-assisted design and analysis), and audit (substantive professional skepticism applied to AI-generated audit evidence). Operational specification of each profession's substantive-engagement standard is deferred to subsequent practitioner-guide work.

What licensing-as-architecture does not provide is universal coverage. It binds only the regulated professions. The data workers who are not licensed in any profession, who are not employed by licensed institutions, who exist in the gig-contractor supply chain that is structurally outside professional licensure — these workers are not protected by licensing-as-architecture as currently constituted. Other routes (potentially including new categories of regulation, or expansion of professional licensing to include AI-development-specific roles) would be required to extend protection to them.

Net assessment: Authority-adjacent to the function-definition layer; existing infrastructure can be extended rather than constructed greenfield; faster timeline than the leverage-adjacent routes. Highest near-term feasibility for the regulated-profession case. Does not extend to the unregulated supply-chain layer.

4.6 Comparative Conclusion

The five candidate routes face two qualitatively different kinds of problem. The four routes SI-WP-009 names (compute and cloud gatekeepers, procurement coalitions, treaty blocs, insurance and liability architecture) possess leverage but not native authority over the professional function-definition layer; they would have to construct or coordinate authority capable of reaching that layer before they could bind. Licensing-as-architecture possesses authority adjacent to the function being claimed; the required work is extension rather than construction. These are different kinds of problem, operating on different timelines, because constructing or coordinating authority over a function is slower than extending existing authority over it.

For context: treaty ratification or major regulatory package negotiation typically runs 5–10+ years; cross-jurisdictional procurement coalition standardization runs 3–7 years; insurance actuarial repricing is reactive and lagged; state board rulemaking or statutory expansion for licensing conditions runs 1–4 years. Licensing is fastest relative to the alternatives, not fast in absolute terms. These are comparative timelines. None of them are fast enough for the workers already affected. The claim is only that licensing is the least slow. "Near-term" in this paper refers to the 2026–2030 horizon over which the labor-layer pattern documented in Section 1 is currently unfolding.

This does not mean licensing-as-architecture will be deployed. The extension problem is real and is closed only by sustained professional and public pressure on existing institutional actors who possess partial authority. The pressure does not currently exist at the scope required. Among the routes SI-WP-009 names plus licensing-as-architecture, licensing-as-architecture is the route most likely to be deployed on the relevant timeline if the pressure emerges. Other routes outside this five-route taxonomy (tort liability expansion, shareholder action, bankruptcy-law constraints, fiduciary-duty claims) may also produce binding effects and are not assessed in this paper.

5. The Authority-Extension Pressure Problem and What Would Close It

5.1 The Structure of the Problem

Existing licensing infrastructure has partial authority over the professional functions institutions are claiming to perform. The institutional actors operating that infrastructure are not currently using their authority or seeking statutory expansion of it for AI-deployment binding. The authority-extension pressure problem is the difference between what the licensing infrastructure could do (with existing authority or with achievable statutory expansion of existing authority) and what it is doing.

The bounded-rationality framework SI-WP-007 develops (Gantz, 2026a) provides the foundation. Bounded rationality, in the sense Simon (1947) introduced and SI-WP-007 applies to institutional actors, names the structural fact that decision-makers face finite cognitive and political capacity. They must allocate that capacity among competing priorities; they do not optimize across all available actions. They satisfice, taking the actions whose immediate cost is low enough relative to the immediate benefit to clear the threshold their finite capacity permits. SI-WP-007 develops this framework specifically for the actors whose authority would have to be extended for relational AI deployment to scale, and identifies a structural pattern: when the cost of acting on a problem is immediate and concrete while the benefit is diffuse and statistical, satisficing actors will systematically underuse the authority they possess to address the problem.

Licensing bodies operate under exactly these conditions. They have finite capacity, finite political capital, and competing priorities. Acting on AI deployment requires capacity and political capital they currently allocate to other priorities. The cost of action is immediate and concrete (procedural complexity, political pushback from licensed institutions, regulatory uncertainty); the benefit is diffuse and statistical (the prevention of harms that would otherwise occur to the public the licensing body serves). Under bounded rationality, the rational satisficing actor allocates capacity and political capital to immediate concrete priorities rather than to diffuse statistical ones.

Kingsbury Barry and Montanez's published work specifies the design condition that would change this calculation: the architecture must make the licensing body's use of its authority the easier path than continued non-use, rather than relying on sustained discipline against immediate incentive cost (Kingsbury Barry and Montanez, 2026). This requires changes that the licensing body itself cannot fully produce. The pressure has to come from outside.

5.2 Where the Pressure Comes From

Three sources of pressure on licensing bodies are structurally available:

Professional pressure from licensed practitioners. Licensed physicians, attorneys, engineers, and accountants have institutional standing within their licensing bodies. When practitioners identify a deployment-layer harm that the licensing body has not addressed, the practitioners can raise the issue through the licensing body's existing governance processes. Work in development on board certification in the AI age (in the medical case) is an instance of this pressure operating. The structural significance of practitioner-side pressure is that it comes from inside the licensed profession and cannot be dismissed as external interference.

Public pressure through regulatory comment, journalism, and political accountability. Licensing bodies are typically accountable to state legislatures, state attorneys general, or state professional regulatory commissions. Public attention to deployment-layer harms produces accountability pressure on these political principals, who in turn pressure the licensing bodies. The Hangzhou ruling's wide international coverage demonstrates how public attention shifts what is legally and politically tractable.

Liability and reputational pressure. Licensing bodies that fail to act on identifiable systemic harms face reputational risk and, in some jurisdictions, legal exposure. The combination of practitioner-side pressure and public-side pressure produces reputational risk that licensing bodies do not absorb without responding.

The three pressures compound. Professional pressure provides the substantive field knowledge (the experts inside the field articulating what is happening). Public pressure provides the political accountability (the legislators and attorneys general who oversee the licensing bodies). Liability and reputational pressure provides the reputational consequence (the institutional cost of inaction).

5.3 The Specific Mechanisms Available

Three operational mechanisms are structurally available within authority-adjacent extension of existing licensing infrastructure:

Conditional licensure. Licensed institutions or licensed practitioners deploying AI must demonstrate that the deployment satisfies specified conditions for the licensure to be maintained. The conditions can include labor-relation requirements (workers performing functions constitutive of the licensed work product cannot be progressively excluded from substantive engagement), AI deployment requirements (AI systems used in the licensed work must meet calibration and oversight standards), and reporting requirements (institutions or practitioners must disclose AI deployment patterns to the licensing body). The specific mechanism for implementing conditional licensure varies by profession and jurisdiction. In some, existing licensing authority covers the institutional level (state health departments licensing hospitals). In others, the institutional-level authority requires statutory expansion or operates through accreditation bodies (the Joint Commission in medicine). In still others, the authority operates only at the individual practitioner level and the institutional-level binding must be constructed through practitioner-level conditions that aggregate to institutional effects (state bar associations binding individual attorneys, which collectively constrain firm behavior).

Continuing-licensure audit. Licensed institutions or practitioners must periodically demonstrate continued compliance with the conditions for licensure. Audit mechanisms can include random sampling, complaint-driven investigation, and pattern-recognition based on aggregated reporting data. Audit infrastructure exists in distributed form across regulated professions but does not currently include AI-deployment-pattern monitoring at the scope licensing-as-architecture would require; this is one of the specific authority-extension problems.

De-licensure for non-compliance. Licensed institutions or practitioners that fail to satisfy the conditions lose licensure. The institution that has redefined its labor practices to evade the licensure conditions has redefined itself out of being a licensed institution. De-licensure authority exists in distributed form across regulated professions and is among the most powerful enforcement mechanisms available; it is also the least frequently used, because the institutional cost of de-licensure (to both the licensing body and the licensed institution) creates strong pressure toward less drastic interventions.

Consider how the three mechanisms would compose in practice for a state medical board implementing conditional licensure for AI-assisted diagnostics. The trigger condition: any institution deploying AI for diagnostic triage must log contemporaneous physician review for a specified fraction of outputs. Compliance evidence is generated inside the clinical workflow (timestamped attestation linked to the patient record, not a separate compliance report). The audit surface: the board conducts random, unannounced sampling of logged cases on a defined schedule, plus complaint-driven review. If sampling reveals systematic attestation without substantive review, the board escalates: first, mandated workflow correction and reporting; second, conditional licensure restrictions limiting AI deployment scope; third, de-licensure of the institution's AI-deployment authorization if the pattern persists. The sequence does not prescribe how the AI is used; it enforces what must be present for the work to remain licensed medical care.
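The escalation sequence described above can be sketched in code. This is a minimal illustrative sketch, not a specification: the function names, the compliance threshold, and the three-tier escalation ladder are all hypothetical assumptions introduced here for clarity, not terms drawn from any licensing body's rules.

```python
import random

# Hypothetical escalation ladder, mirroring the three-step sequence in the text.
# The tier labels and ordering are illustrative assumptions.
ESCALATION_TIERS = [
    "mandated workflow correction and reporting",
    "conditional licensure restrictions on AI deployment scope",
    "de-licensure of AI-deployment authorization",
]

def audit_sample(logged_cases, sample_size, rng=random):
    """Randomly sample logged AI-assisted cases for contemporaneous-review audit."""
    k = min(sample_size, len(logged_cases))
    return rng.sample(logged_cases, k)

def compliance_rate(sampled_cases):
    """Fraction of sampled cases carrying a substantive, timestamped physician review."""
    if not sampled_cases:
        return 1.0
    reviewed = sum(1 for case in sampled_cases if case.get("substantive_review"))
    return reviewed / len(sampled_cases)

def escalation_step(rate, threshold, prior_violations):
    """Return the next escalation tier, or None if the institution is compliant.

    Repeated violations advance the institution up the ladder; the ladder
    terminates at de-licensure of the AI-deployment authorization.
    """
    if rate >= threshold:
        return None
    tier = min(prior_violations, len(ESCALATION_TIERS) - 1)
    return ESCALATION_TIERS[tier]
```

The design point the sketch makes concrete is that the audit surface operates on evidence already generated inside the workflow (the logged cases), not on a separate compliance report, and that escalation is a function of observed rate and history rather than regulator discretion alone.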

To avoid the ceremonial-governance failure documented in SI-WP-008 (Gantz, 2026b), audit mechanisms must be designed so that the evidence of compliance is generated in the course of the work itself, not produced afterward for the regulator. Random, unannounced case-review sampling, and separation of observation from adjudication, are minimum design requirements. The architecture must be operationally auditable without requiring institutions to manufacture compliance evidence as a separate workflow; if compliance evidence is separable from the work, the work and the evidence will diverge under operational pressure, and licensing-as-architecture itself becomes ceremonial.

These three mechanisms operate within authority-adjacent extension of existing licensing infrastructure. They do not require construction of new regulatory bodies. They do require the licensing infrastructure to extend its authority and operating practices to encompass AI deployment patterns, which is an extension against an existing institutional base rather than greenfield construction.

The pressure does not currently exist at the scope required, but it is structurally available. Its emergence requires sustained engagement by practitioners, sustained public attention to deployment-layer harms, and sustained pressure on the political principals to whom licensing bodies are accountable.

6. Disconfirmation Criteria and Counterarguments

This paper is falsifiable. Three classes of falsification are specified.

6.1 Local Counterexamples

A licensed institution that has progressively excluded substantive practitioner engagement from AI-mediated work product, that has been licensed by its licensing body throughout the exclusion, and that has produced demonstrably equivalent or superior outcomes by the metrics the licensing framework was designed to protect, would constitute a local counterexample to this paper's argument. The counterexample would not falsify the structural claim but would specify the conditions under which the structural claim does not apply.

6.2 Domain-Level Counterpatterns

A regulated profession in which licensing-as-architecture has been deployed (conditional licensure, audit, de-licensure for non-compliance) and in which the labor-layer extraction pattern documented in this paper has nevertheless continued at the rate that prevailed before deployment would constitute a domain-level counterpattern. The counterpattern would falsify the claim that licensing-as-architecture binds at the function-definition layer. The claim's prediction is that deployment of licensing-as-architecture produces measurable reductions in the rate at which the labor-layer extraction pattern occurs in the affected domain. Indicative measurement categories include the rate of AI-linked role elimination, pay-cut reassignment, contractor substitution, substantive-practitioner exclusion, ceremonial sign-off, and documented practitioner-displacement events. The categories vary in their natural unit of measurement: role elimination and pay-cut reassignment are countable per licensed institution per reporting period; substantive-practitioner exclusion and ceremonial sign-off are patterns within workflows that require domain-specific operationalization to count meaningfully. The operational specification of falsification-ready indicators per profession is deferred to subsequent practitioner-guide work and is not developed at scope in this paper. What is specified here is the kind of indicator that would constitute domain-level falsification; the per-profession operationalization that would make those indicators measurable in practice is a separate research artifact.
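The shape of the domain-level falsification test can be made concrete with a small sketch. Everything here is an illustrative assumption: the per-institution-per-period rate is one of several defensible units named above, and the reduction threshold is a free parameter the practitioner-guide work would have to fix, not a value this paper specifies.

```python
def period_rate(event_counts, institution_count):
    """Extraction events per licensed institution per reporting period.

    event_counts: one count per reporting period (e.g. AI-linked role
    eliminations); institution_count: size of the licensed reporting base.
    """
    if institution_count == 0:
        raise ValueError("no licensed institutions in reporting base")
    return sum(event_counts) / (institution_count * len(event_counts))

def domain_counterpattern(pre_rate, post_rate, min_reduction=0.0):
    """True if the post-deployment rate fails to fall below the pre-deployment
    rate by at least min_reduction -- i.e., the domain-level claim is falsified.

    min_reduction is a hypothetical policy parameter (0.25 would demand a 25%
    reduction before the deployment counts as having bound the pattern).
    """
    return post_rate >= pre_rate * (1.0 - min_reduction)
```

The sketch shows why the text insists on domain-specific operationalization: the test is trivial once the indicator is countable, and the entire difficulty lives in defining what counts as one event in each profession.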

6.3 Field-Level Counterpatterns

The field-level prediction is that under continued absence of binding architecture at the function-definition layer, the labor-layer extraction pattern will continue and intensify; under deployment of licensing-as-architecture in regulated professions, the pattern will be reduced in those professions; and under absence of equivalent binding mechanisms for unregulated supply-chain labor, the pattern will continue and intensify there even as it is reduced in regulated professions. A field-level pattern in which the labor-layer extraction reduces across all layers (regulated and unregulated) absent binding architecture, or intensifies in regulated professions despite licensing-as-architecture deployment, would falsify the field-level structural argument. A field-level counterpattern would require durable reduction across at least two reporting cycles, not a temporary reduction following recession, hiring freeze, reputational scandal, litigation shock, or one-time regulatory intervention.

6.4 The Strongest Counterarguments

Three classes of counterargument deserve direct engagement.

The regulatory-capture argument. Critics may argue that licensing bodies are captured by the institutions they regulate, and that pressuring them to act is futile because they will act in the interests of their licensees rather than in the public interest. This argument is structurally serious; regulatory capture is real, and licensing bodies are not exempt. The response is that regulatory capture operates on a continuum, and the institutional design of licensing bodies (with their boards, public-interest mandates, and accountability to political principals) provides resistance to capture greater than the resistance available in the private sector or in newly constructed regulatory bodies. Licensing-as-architecture is more vulnerable to capture than independent expert review and less vulnerable than corporate self-regulation; among the available routes, it is closer to the resistant end of the continuum than to the captured end. Historical precedent confirms that licensing bodies can expand authority under sustained pressure: state medical boards tightened opioid prescribing and pain-clinic licensing following the overdose crisis; state accountancy boards and the PCAOB expanded audit-independence and quality-control requirements post-Enron; several state bars adopted mandatory trust-account auditing and technology-competence rules after public and practitioner pressure. These expansions were not automatic, but they demonstrate that authority-adjacent bodies can move when professional, public, and liability pressure align. The architecture's resistance to capture depends on specific design features: transparency in conditional-licensure conditions, separation between observation and adjudication consistent with SM-011's monitoring architecture, public-comment requirements in licensing rule-making, and accountability to political principals through state legislatures and attorneys general. The operational specification of these anti-capture features is deferred to subsequent practitioner-guide work and is not developed at scope in this paper. What this paper claims is that the anti-capture features are architecturally available, not that they are operationally specified at deployment-ready resolution.

The cross-profession-non-transferability argument. Critics may argue that the medical case is special (because patient harm is concentrated and visible), and that the analysis does not generalize to other professions where harm is more distributed and less visible. This argument deserves engagement. The medical case has features (concentrated catastrophic harm, well-developed liability structures, strong professional identity) that strengthen the licensing-as-architecture argument in medicine specifically. Other professions (law, engineering, accounting) have different feature profiles. The cross-profession structural argument is that the function-definition mechanism operates similarly across regulated professions, even where the harm patterns differ. The operational specification per profession is left to subsequent practitioner-guide work and is not developed in this paper.

The timing argument. Critics may argue that even licensing operates too slowly: the labor-layer extraction is happening now, and licensing-as-architecture would take years to deploy at the scope required. The argument is correct that licensing operates on slower timelines than the harm demands. The structural claim of this paper is comparative: licensing operates faster than the alternative routes assessed in Section 4 because it is authority-adjacent to the function-definition layer rather than authority-misaligned leverage requiring construction of new authority over the function. The claim is not that licensing is fast in absolute terms; the claim is that, of the available routes, licensing is the fastest. The argument that "even the fastest route is too slow" is a different argument from the comparative-feasibility argument this paper makes. If true, it would mean no available route is sufficient on the relevant timeline. The paper does not foreclose that possibility.

7. Closing

The pattern documented in Section 1 is not a future risk. It is happening now to specific named workers at specific named companies. Jill at Oracle. Cynthia Sloan. Zhou. The 22% of the data workers surveyed who have experienced homelessness while training the systems that the AI companies are deploying. The cases above are documentation, not illustration. The structural mechanism this paper diagnoses is what is producing the documented record, and the architecture this paper specifies is what could bind against the mechanism if the actors with existing or expandable authority chose to use or extend it.

The actors with existing or expandable authority have not yet chosen to use or extend it. The professional pressure, public pressure, and liability pressure that would produce the will to act do not yet exist at the scope required. This paper is part of the documentation those pressures would draw on. It is not the pressure itself.

The paper's prescription is partial by design. Licensing-as-architecture can bind the regulated-profession subset of labor-layer extraction because professional regulators already possess authority over the functions being claimed. It does not, as currently specified, protect the unregulated data-work supply chain. The persistence of that unregulated layer is not a failure of the licensing argument; it is a boundary condition of the architecture and a design requirement for subsequent work.

Binding the unregulated layer would require architecture that operates where licensing cannot: vendor-level licensing or fiduciary requirements for AI data-work platforms, procurement-linked labor standards that condition enterprise AI purchases on supply-chain compensation and transparency floors, or platform-level duty-of-care frameworks that treat training-data labor as constitutive infrastructure rather than disposable contracting. These routes lack the authority-adjacency that makes licensing tractable for regulated professions, which is precisely why the unregulated layer remains the harder binding problem and requires separate architectural development.

What this paper does and does not do:

This paper does not predict that licensing-as-architecture will be deployed. It specifies what would have to be true for it to be. The authority-extension pressure problem is the binding constraint, and the constraint may not close.

This paper does not claim its prescription is empirically validated. The empirical strength of the documentation in Section 1 is asymmetric to the empirical strength of the prescription in Sections 4 and 5. The prescription is structurally specified and architecturally tractable. It has not been deployed at scope. The deployment-feasibility claims in Section 4 are pre-deployment analysis.

This paper does not provide implementation specifications detailed enough to deploy without further work. The cross-profession structural argument is at scope; the operational specification per profession is left to subsequent practitioner-guide work.

This paper does not claim the architecture is sufficient on its own. SI-WP-009's design requirement specifies that interventions must bind at all three levels in mutually reinforcing form. Licensing-as-architecture is one component of a sufficient intervention, not the whole of it. And it binds only the regulated-profession subset of the labor-layer extraction pattern.

This paper does not adjudicate the motivations of any specific actor. The structural mechanism it diagnoses operates regardless of what any individual actor intends. The actors deploying AI systems at scale are operating inside the optimization gradient that produces the labor-layer extraction. None of them needs to be acting in bad faith for the pattern to operate. The structural conditions select for the pattern regardless of individual intent. What the structural conditions select for is captured by the documented record: substantial sums freed by rounds of layoffs at major companies routed into infrastructure that further reduces the need for the workers extracted in those rounds (TD Cowen analysis as reported in Chow, 2026, estimating $8 to $10 billion freed by Oracle's planned reductions); $22,620 median annual earnings for the data workers training the systems that will eventually not need them; recorded workflows replacing the engineers who recorded them. The pattern is what the gradient produces. The gradient is what the present incentive architecture rewards. Whether any specific actor wants the pattern is downstream of the structural analysis and outside the scope of this paper to adjudicate.

This paper does not specify the long-horizon trajectory of the gradient. The trajectory does not stop at the labor layer. SI-WP-009 develops the structural consequence at length: the optimization logic that removes inconvenient humans does not contain an ownership exception, and the actors operating inside the gradient at the present moment are operating on a timeline that does not exempt them from it (Gantz, 2026c, Section 5).

What this paper does:

It documents the labor-layer extraction pattern with primary sources. It diagnoses the function-redefinition mechanism producing the pattern. It engages the strongest alternative explanations and industry counteraccounts and shows that the layer-specific claim survives engagement with them. It identifies the function-definition layer as the binding layer at which intervention can hold, and clarifies that licensing binds authorization to perform regulated professional functions rather than employment as such. It assesses each of the four routes SI-WP-009 Section 6.3.2 names plus licensing-as-architecture as a fifth route, and argues that licensing-as-architecture is the most feasible on the relevant timeline because it is authority-adjacent to the function-definition layer rather than authority-misaligned leverage requiring construction of new authority over the function. It specifies what would close the authority-extension pressure problem that prevents the architecture from being deployed. It is explicit that the prescription binds the regulated-profession subset and that the unregulated supply-chain layer requires additional architecture.

The paper produces vocabulary for naming what is happening at the labor layer of AI development at the structural level. The actors who will eventually use that vocabulary, if any of them do, are not the author of this paper. They are professional regulators, journalists, practitioners inside regulated professions, public-interest advocates, and the political principals to whom licensing bodies are accountable. The vocabulary exists in this paper because someone needed to put it there for them to use.

Whether the actors with existing or expandable authority use or extend the authority is the question this paper poses and does not answer. The question is open. The answer will be visible in retrospect.

Document Dependencies

Prerequisites: SI-WP-007 (The Human Accountability Problem in Relational AI Deployment) for the application of bounded-rationality framework to institutional actors that explains why existing institutional actors satisfice rather than use available authority; SI-WP-008 (Ceremonial Governance Is Lethal) for the diagnosis of the deployment-layer failure mode in medicine that licensing-as-architecture would address; SI-WP-009 (The Extraction Trajectory) for the three-level architecture, the function-definition layer framing, and the four-route taxonomy this paper's comparative-feasibility argument extends with a fifth route.

Post-requisites: Per-profession Practitioner Guides (PG-series, future) developing operational specification for specific professional regulator audiences. The white paper does the cross-profession structural argument; the practitioner guides do the operational depth per profession.

Scale: Level 3 (primary, governance architecture for regulated professions at the field level), with Level 1 and Level 2 grounding to come from the per-profession operational specifications.

References
