Twenty percent of full-time American workers report AI has automated parts of their tasks.
Survey data that jolts the debate
Epoch AI and Ipsos asked 2,000 U.S. adults how they use artificial intelligence. Half the respondents said they'd used AI in the previous week for work or personal reasons. Among full-time employees, 20% reported that AI has already taken over tasks they used to do themselves, while 15% said AI added new chores they wouldn't have done otherwise.
Those numbers aren't trivial. They sit at the center of a bigger argument about whether AI is mainly augmenting human work or replacing it.
That argument matters because augmentation and displacement pull the labor market in very different directions. Augmentation tends to boost productivity and change job content. Displacement can cut the amount of paid human work available in some roles.
The Epoch AI–Ipsos data suggest displacement may be catching up to, or even outpacing, augmentation.
But the survey is a snapshot — a mix of perception and self-reporting — not a causal ledger of lost jobs. Still, it's one more signal that policymakers and employers are dealing with real shifts in everyday work.
Experts push back and probe the measures
The survey findings landed next to several academic papers and policy reports this month that try to measure AI's labor-market impact more precisely. The Federal Reserve Bank of St. Louis released a report that pairs two indicators: a theoretical exposure metric estimating whether large language models could cut task time by half in certain occupations, and a measure of actual AI adoption based on the Real-Time Population Survey created by Adam Blandin, professor of economics at Vanderbilt University, and Alexander Bick, economic policy advisor at the Federal Reserve Bank of St. Louis.
The point is, those two measures don't line up perfectly. Computational and mathematical occupations looked most exposed in the Fed paper — roughly 80% theoretical exposure — and reported the highest AI adoption rates, about 45%. They also showed the largest rise in unemployment between 2022 and 2025, up about 1.2 percentage points.
But critics warn against reading too much into self-reported adoption. Will Rinehart, senior technology fellow at the American Enterprise Institute, told Reason that survey answers on AI use may not match actual system logs. "In social media research, self-reports of Internet use are only moderately correlated with log file data," Rinehart said. "To know the actual 'actual AI adoption' rate, we need log file usage data from Anthropic and OpenAI."
Real-world stumbles complicate the headline numbers
Companies that rushed into automation have sometimes had to reverse course. The fintech firm Klarna reportedly rehired human staff after an 11-month experiment with heavy automation failed to deliver. At Amazon, efforts to replace human roles have at times reduced productivity rather than raised it.
So adoption doesn't always mean improved performance. And adoption plus underperformance doesn't equal jobless doom — it can create new work to monitor, fix, and supervise AI systems.
Findings from a Stanford Institute for Human-Centered AI working paper add texture to the story. Researchers drawing on Anthropic's usage data found that among software developers aged 22 to 25, headcount in July 2025 was nearly 20% below its late-2022 peak. Across the two most exposed occupation quintiles, employment for 22-to-25-year-olds fell about 6% between late 2022 and July 2025. Those patterns echo recent reporting about new graduates struggling in certain tech hiring markets.
Still, the Stanford analysis also underlines how concentrated the effects can be. Young software developers and roles with heavy LLM exposure look different from, say, personal service jobs, where both the Fed and other surveys show much lower exposure and adoption.
Where the friction shows up
AI often replaces specific tasks within jobs rather than entire occupations. The Epoch AI–Ipsos poll captured that nuance: 15% of workers said AI introduced new work they hadn't done before. That can mean writing prompts, vetting AI outputs, or running quality checks. It can also mean extra time fixing AI mistakes.
The messy middle is where employers, workers, and regulators will spend much of their energy. Companies want faster workflows and lower costs.
Employees worry about job content and stability. Regulators are trying to decide when to step in — and how quickly.
Nichols Miailhe, AI policy leader at the Global Policy on Artificial Intelligence, framed the stakes plainly. "When one in five workers say AI is already replacing parts of their job, we can start talking about labor market restructuring happening in real time," Miailhe said in an interview with NBC. "The fact that replacement seems to be outpacing augmentation should draw our attention: the policy window to shape how AI transforms work is probably closing faster than most governments realize."
That argument has urgency: if policymakers want to set rules, fund retraining, or adjust safety nets, a later start narrows their options.
Data gaps and what researchers want next
Researchers and critics agree on one thing — better usage data would help. Surveys capture perception and self-reported behavior.
Platform logs would show how often employees actually call on LLMs and other AI tools to finish tasks. Rinehart has pressed for that kind of granular evidence, saying the field needs the raw usage files to validate self-reports.
Companies like Anthropic and OpenAI hold much of the relevant telemetry. Those firms are private, and protecting user data is a real concern. Still, researchers say anonymized logs could answer whether tools are replacing human time or just reshaping it.
That also matters for forecasting. The Federal Reserve Bank of Chicago and academics from several universities have been updating macroeconomic models to account for possible labor-market disruption. But modelers don't agree on how big the hit will be, or how quickly it will arrive.
One practical sign of uncertainty: many employers report using generative AI for hiring, training, and onboarding. Yet employees remain uneasy. Recent industry surveys find almost half of employers say they use generative AI in HR processes, while a large share of workers say they don't expect technology to make work better for them.
So companies are hiring AI tools into HR while workers still fear the consequences. That mismatch could drive friction in labor markets and politics.
What this means for workers and managers
For now, adaptation looks uneven.
Some job categories — coding, data work, and certain customer-service roles — show rapid change. Others, like many in-person service jobs, show far less AI exposure.
That doesn't mean the story's settled. Economists caution that 2022 was an unusual labor year, and comparing to that peak can exaggerate change. At the same time, concentrated losses among young software developers are real enough to shape graduates' career choices.
So where do people focus? Training and oversight.
Employers that want to keep AI productive will have to build processes for prompt engineering, quality control, and human review. Workers need clearer paths to build AI skills and to move into roles where judgment and context matter.
The point is, the headline number — 20% of full-time workers reporting task replacement — doesn't tell the final story. It shows a major shift in some places and a modest nudge in others. The challenge is mapping where each is happening and then matching policy and corporate responses.