WASHINGTON — The hearing room told one story. The policy memo tells another. Put them together, and the shape of the thing becomes impossible to ignore.
At the House Education and Workforce Committee, Bradford Kelley waved away concerns that artificial intelligence is already being used to retaliate against workers who organize. No cases, he said. No evidence. Nothing but abstractions and vibes. When workers asked for “a seat at the table,” he treated the phrase like a punchline.
Outside that room, the public record has been filling up.
The National Labor Relations Board warned more than two years ago that electronic surveillance and algorithmic management systems can interfere with workers’ Section 7 rights. The Equal Employment Opportunity Commission followed with its own warning that biometric tracking and wearable surveillance tools can enable discrimination and retaliation. In Missouri, workers at Amazon filed unfair labor practice charges alleging that constant algorithmic monitoring chills organizing and protected concerted activity.
Those are not thought experiments. They are warnings, filings, and enforcement signals from the federal government and workers already living under these systems.
Then came the Trump Administration’s AI Action Plan.
On paper, it is optimistic, glossy, and relentlessly upbeat. In a Department of Labor piece authored by Deputy Secretary Keith Sonderling, AI is framed as an opportunity problem, not a power problem. The risk, readers are told, is not job loss but the speed of change. The solution is agility. Retraining. Talent pipelines. Innovation hubs. Rapid feedback loops. Industry partnerships.
Workers appear throughout the document, but only as objects to be moved faster.
What is striking is not what the plan says, but what it never pauses to ask. Who decides how AI is deployed inside workplaces? Who governs systems that track keystrokes, monitor movement, score productivity, flag “risk,” or recommend discipline? Who gets veto power when an algorithm quietly replaces human judgment with metrics no one on the shop floor can see or challenge?
Those questions do not fit neatly into an agility framework. Agility assumes the direction is already set.
This is where Kelley’s contempt makes sense. If AI is defined solely as a competitiveness tool, then worker participation looks like friction. If leadership is measured by speed, then deliberation becomes delay. If dominance is the goal, consent is optional.
The administration’s plan emphasizes aligning federal agencies and partnering with industry. It does not meaningfully address giving workers governance authority over systems that directly shape their pay, schedules, discipline, and ability to organize. The future is something workers are expected to adapt to, not help design.
That puts the hearing in a different light. When Kelley asks what “a seat at the table” even means, he is not confused. He is speaking from within a policy world that has already decided the table belongs to someone else.
Workers, regulators, and public records point to a present in which AI is already being used to surveil, score, and suppress. The administration is planning for a future where workers must move faster to accommodate those systems. The gap between those two realities is not theoretical. It is the conflict.
Call it leadership if you want (it’s not). Call it agility. Call it dominance. But do not call it neutral.
Eyes open. Voices loud.