
Over the past few years, conversations about AI and work have tended to swing between two extremes. On the one hand, AI is often framed as an unstoppable force that will eliminate large categories of jobs. On the other hand, it is treated as just another tool, one that may change some workflows but leave the labor market largely intact.
As is often the case, reality appears to be more complicated than either narrative suggests.
A recent research report from Anthropic offers a useful example of why we need more nuanced conversations about the market impacts of AI. Rather than asking only what AI can theoretically do, the report looks at where AI is actually being used in real-world work contexts and how that use may connect to broader labor market patterns. The findings are important not just for economists or policymakers, but also for technical communicators, whose work increasingly sits at the intersection of technology, labor, and organizational change.
For those of us in technical communication, the question is not simply whether AI will affect work. It already is. The better question is this: how can technical communicators help organizations respond to the market impacts of AI in ways that remain ethical, practical, and centered on human needs?
Moving Beyond Speculation about the Market Impacts of AI
One of the most useful contributions of Anthropic’s report is that it pushes back on purely speculative claims about AI-driven disruption. The authors introduce a measure they call “observed exposure,” which combines theoretical AI capability with real-world usage data, giving more weight to work-related and automated uses. In other words, they are not only asking whether an LLM could speed up a task. They are asking whether that task is actually being carried out with AI in practice.
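To make the idea concrete, here is a toy sketch of what a usage-weighted exposure score might look like. The formula, weights, and numbers below are illustrative assumptions of mine, not Anthropic's actual methodology; the point is only to show how weighting automated uses more heavily than augmentative ones changes the score.

```python
# Toy illustration of a usage-weighted "observed exposure" style score.
# NOTE: the formula and weights are hypothetical, chosen for illustration.
# They are NOT Anthropic's actual methodology.

def observed_exposure(capability: float, automated_share: float,
                      augmented_share: float,
                      w_automated: float = 1.0,
                      w_augmented: float = 0.5) -> float:
    """Combine theoretical AI capability for an occupation's tasks with
    real-world usage data, weighting automated (worker-replacing) uses
    more heavily than augmentative (worker-assisting) uses."""
    usage_weight = w_automated * automated_share + w_augmented * augmented_share
    return capability * usage_weight

# Two occupations with the same theoretical capability but different
# real-world usage profiles end up with different observed exposure.
mostly_automated = observed_exposure(0.8, automated_share=0.6, augmented_share=0.4)
mostly_augmented = observed_exposure(0.8, automated_share=0.1, augmented_share=0.9)
print(round(mostly_automated, 2))  # 0.64
print(round(mostly_augmented, 2))  # 0.44
```

Even in this simplified form, the sketch captures the report's key move: identical theoretical capability can yield very different exposure once you account for how the technology is actually being used.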
That distinction matters.
For technical communicators, this way of thinking should sound familiar. We already know that the difference between what a technology can do and what people actually do with it is often where the most important design and communication challenges emerge. A tool may be technically capable of transforming a workflow, but that does not mean users trust it, understand it, or know how to integrate it responsibly.
In that sense, the market impacts of AI are not just about capability. They are about adoption, interpretation, policy, and use.
This is where technical communicators have an important role to play. We are trained to understand how complex systems are communicated, implemented, and experienced by real people in real contexts. That perspective is especially important now, when many organizations are making decisions about AI based more on fear or enthusiasm than on evidence.
What the Research Suggests So Far
Anthropic’s findings are measured, but still significant.
The report argues that AI use in the labor market remains far below its full theoretical capability. It also finds that occupations with higher observed exposure are projected by the U.S. Bureau of Labor Statistics to grow somewhat less through 2034. At the same time, the researchers do not find a systematic increase in unemployment among highly exposed workers since late 2022, although they do identify suggestive evidence that hiring may have slowed for younger workers in more exposed occupations.
These findings are worth sitting with for a moment.
They do not support the most dramatic predictions about immediate mass displacement. But they also do not support complacency. Instead, they suggest that the market impacts of AI may be emerging gradually, unevenly, and in ways that are not always captured by headline narratives. Work may be reorganized before workers are fully displaced. Hiring patterns may shift before unemployment spikes. Some tasks may be automated while others become more valuable because they require judgment, coordination, or accountability.
This kind of uneven change is precisely the kind of change technical communicators should be paying attention to.
Why Technical Communicators Should Care
It might be tempting to assume that labor market analysis sits outside the concerns of technical communication. But I would argue the opposite.
Technical communicators are often among the people asked to make new technologies legible within organizations. We document systems, translate complexity, support adoption, design content for users, and increasingly contribute to content operations, governance, and workflow design. When AI enters a workplace, technical communicators are often involved, whether formally or informally, in helping people understand what that means.
That means the market impacts of AI are also communication impacts.
When organizations adopt AI tools, workers need to know which tasks are changing, which expectations are changing, who remains accountable, and where human judgment still matters. They need guidance, not just tools. They need communication structures that reduce confusion rather than amplify it.
This is especially true when AI is introduced as a solution to productivity problems. Productivity gains can easily become organizational confusion if roles are not redefined, workflows are not updated, and assumptions about responsibility are not made visible. In many cases, what looks like technological innovation is actually a documentation, training, and governance problem waiting to happen.
Technical communicators are uniquely positioned to address those problems.
Advocating for Human-Centered Work
If we take the market impacts of AI seriously, then technical communicators need to do more than react. We need to advocate.
By advocacy, I do not mean rejecting AI outright. Nor do I mean embracing it uncritically. I mean helping organizations make better decisions about how AI is implemented and how workers are supported through change.
That advocacy can take several forms.
First, technical communicators can help organizations distinguish between augmentation and automation. Anthropic’s report explicitly weights automated uses more heavily than augmentative uses, which is a useful reminder that not all AI adoption has the same labor implications. A tool that helps a writer brainstorm is different from a workflow that removes a writer from the process entirely.
Second, technical communicators can make accountability visible. One of the recurring risks in AI-enabled environments is that responsibility becomes harder to trace. When content is generated, summarized, transformed, or routed through AI systems, it becomes even more important to document who reviews outputs, who approves them, and who is responsible when things go wrong.
Third, technical communicators can advocate for design processes that include workers, not just managers or vendors. If the market impacts of AI are going to be experienced through everyday changes to tasks and roles, then workers need a voice in how those changes are implemented. Human-centered work requires participatory approaches, not top-down adoption models.
Finally, technical communicators can help organizations resist the urge to treat efficiency as the only metric that matters. Efficiency matters, of course. But so do clarity, trust, inclusion, quality, and sustainability. AI may accelerate some tasks, but if it degrades decision-making or erodes user trust, then the long-term costs may outweigh the short-term gains.
A Leadership Opportunity for the Field
What I find most compelling about the current moment is that it creates an opportunity for technical communicators to lead.
Too often, discussions about AI in the workplace are framed as technical questions for engineers or strategic questions for executives. But the implementation of AI is also a rhetorical, ethical, and organizational question. It is about how work gets described, valued, divided, and managed. Those are questions technical communicators are well-equipped to address.
If we want to respond thoughtfully to the market impacts of AI, then we need people who can connect systems to users, policy to practice, and innovation to lived experience. We need people who can ask not only what a technology can do, but what it should do, for whom, and under what conditions.
That is leadership work.
And it is work that technical communicators should be doing now.
Where We Go from Here
Anthropic’s report does not offer a final answer on the market impacts of AI, and to its credit, it does not pretend to. The authors emphasize that the evidence is still early and that their framework is meant to help identify changes as they emerge.
For technical communicators, that uncertainty should not be paralyzing. It should be clarifying.
We do not need to wait for perfect consensus before acting. We can begin now by asking better questions in our organizations. Which tasks are actually changing? Which workers are most affected? Where is AI augmenting work, and where is it replacing it? What new forms of documentation, governance, and training are needed? How do we ensure that efficiency gains do not come at the expense of accountability or human agency?
These are not peripheral questions. They are central to how work will be shaped in the years ahead.
And if technical communicators are willing to step into that conversation, we can do more than interpret the market impacts of AI. We can help shape them.