We Need More Worker Voice When Implementing AI

Employers can enact these strategies to ensure that workplace AI is a win-win for business and workers.
April 30, 2024

This article was produced as part of New America's Initiative on the Future of Work and the Innovation Economy.

Artificial intelligence (AI) is sweeping the American imagination. But in many workplaces, the thinking goes that only a manager gets to decide how and when to start using AI, while everyone else merely decides whether to “buy in” to the company’s strategy. One survey found that 78 percent of corporate officers said their companies were already using AI, but 54 percent of employees reported having “no idea” how their companies were doing so.

In a new report, Worker Power and Voice in the AI Response, researchers from Harvard Law School’s Center for Labor and a Just Economy urge decision-makers to consider AI transitions in the workplace from a new perspective. They conclude that “Workers must be included in the regulation of and decisions regarding how AI is deployed within an enterprise.”

The report’s goal is to lay out a vision of “bold, transformative change to give everyone a voice in building a society in which workers, their families, and communities can prosper.” Labor advocates have championed that sentiment, as have industry conveners such as the World Economic Forum through a partnership with New America.

Types and Uses of AI in the Workplace

The report defines six common workplace AI uses. The two main uses show that AI is reshaping work not just for workers but for managers as well.

The first and most common form is algorithmic management, in which managers use AI to manage, monitor, and control various aspects of work, tasks, or processes. For example, during the workday, AI measures employee productivity toward goals, monitors employees’ communications and time management, and looks for employee risks, all while reporting “real-time updates” to managers. After workers finish a task, some AI systems suggest improvements, and others can even fire them automatically.
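To make the pattern concrete, here is a minimal sketch of the kind of scoring loop such a system might run. It is illustrative Python only: the record fields, the scoring formula, and the 0.8 alert threshold are all hypothetical assumptions, not drawn from the report or from any real product.

```python
from dataclasses import dataclass

@dataclass
class ShiftRecord:
    """One worker's output for a single shift (all fields hypothetical)."""
    worker_id: str
    tasks_completed: int
    tasks_target: int
    idle_minutes: float

def productivity_score(record: ShiftRecord) -> float:
    """Completion ratio against the shift target, penalized for idle time."""
    completion = record.tasks_completed / max(record.tasks_target, 1)
    idle_penalty = min(record.idle_minutes / 60.0, 1.0) * 0.2  # penalty capped at 0.2
    return max(completion - idle_penalty, 0.0)

def flag_for_manager(record: ShiftRecord, threshold: float = 0.8) -> bool:
    """Send a 'real-time update' to a manager when a worker falls below target."""
    return productivity_score(record) < threshold

if __name__ == "__main__":
    shift = ShiftRecord("w-042", tasks_completed=70, tasks_target=100, idle_minutes=30)
    print(round(productivity_score(shift), 2))  # 0.6
    print(flag_for_manager(shift))              # True -> the system alerts a manager
```

The sketch shows how little it takes: a ratio, a penalty, and a threshold can quietly become the basis for discipline, with the worker never seeing the formula.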

The second is AI-driven workplace surveillance. In contrast to algorithmic management, in which AI makes management decisions outright, workplace surveillance involves managers using AI to monitor employees’ movements, typing speed, and other behaviors in ways that could infringe on employees’ rights to privacy, work-life balance, and personal boundaries. For example, Amazon has made delivery drivers sign “biometric consent forms” that allow the company to track not only whether they are speeding or staying on their delivery routes but also whether they ever drink coffee or yawn while driving. Meanwhile, some Amazon warehouse workers must wear patented watches that track their hand speed as they process packages.
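The typing-speed tracking described above can be sketched just as simply. Again, everything here is a hypothetical illustration: the five-keystrokes-per-word conversion and the “longest pause” metric are assumptions made for the example, not a description of any vendor’s actual tool.

```python
def typing_speed_wpm(keystroke_times: list[float]) -> float:
    """Estimate words per minute from keystroke timestamps (in seconds),
    assuming roughly five keystrokes per word."""
    if len(keystroke_times) < 2:
        return 0.0
    elapsed_minutes = (keystroke_times[-1] - keystroke_times[0]) / 60.0
    words = len(keystroke_times) / 5.0
    return words / elapsed_minutes if elapsed_minutes > 0 else 0.0

def longest_pause(keystroke_times: list[float]) -> float:
    """Longest gap between keystrokes -- a proxy some tools treat as 'inactivity'."""
    gaps = [b - a for a, b in zip(keystroke_times, keystroke_times[1:])]
    return max(gaps, default=0.0)

if __name__ == "__main__":
    # One minute of simulated keystrokes, four per second.
    times = [i * 0.25 for i in range(240)]
    print(round(typing_speed_wpm(times), 1))  # ~48.2 wpm
    print(longest_pause(times))               # 0.25 seconds
```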

To be clear, not all workplace AI uses are invasive or demanding. Business leaders are developing frameworks for onboarding AI in ways that take pressure off workers, help them get more done, and even encourage companies to hire more workers. Economists Daron Acemoglu and Simon Johnson call this best-case scenario the “productivity bandwagon” in their bestselling 2023 book Power and Progress because, when it happens, technology helps workers, on average, get more done. But it is not automatic. Technology, they warn, tends to be controlled by titans whose visions of the future do not always grasp average workers’ wants and needs. To achieve technology’s potential, policymakers need to give those workers a role in shaping tech’s path.

Recommendations for AI Workplace Implementation Plans that Respect Worker Voice

1. Elect “AI Monitors” in Every Workplace Where AI Is Deployed

The report’s first recommendation is to mandate AI monitors elected by workers in every workplace where AI is used. The authors believe that elected AI monitors can help workers better understand how AI will affect their workplaces. After receiving training in fundamental AI concepts, the monitors would gather accurate information about AI and about workers’ legal rights, and help workers report and blow the whistle on these issues.

Outside each individual workplace, monitors would meet and collaborate with other monitors from the same industry and area, coordinate with government regulators, and participate in sectoral labor boards along with businesses, regulators, and unions.

2. Require Companies to Be Transparent about Their AI Uses

Harvard’s report calls for two transparency policies: first, companies should have to share information on their AI policies in plain language that an average worker can understand; second, employers should have to report surveillance technology on “persuader” reports, legally mandated disclosures detailing employer efforts to stop employee unionization.

A number of cities and states have already taken steps to require or encourage employers to be transparent about when and how they use AI in the workplace, and last year many state legislatures introduced and passed AI transparency laws. Still, more work is needed.

3. Expand Workplace Penalties and Protections

Finally, the report calls for changes to labor law, which governs union activities, and to employment law, which governs workplace safety and wages.

Federally, the authors call for changes under the National Labor Relations Act. First, the report argues that embedding anti-union messaging in company software and technology is nothing more than a high-tech “captive audience meeting,” in which employers require employees to listen to anti-union propaganda. The authors also call for a change to Supreme Court precedent that, they believe, prevents unions from bargaining with employers before AI is implemented in the workplace.

Harvard also calls for changes to employment law. First, the report calls to amend the Occupational Safety and Health Act so that “the right to a ‘safe and healthful workplace’ . . . includes the right to be free from harms caused by AI in the workplace.” Meanwhile, states and the federal government should continue to update the definition of “employee” so that companies cannot avoid labor and employment laws by misclassifying workers as “independent contractors.”

Workers and Managers Need New AI Regulations

Harvard’s report reminds us how little AI workplace law currently exists. America’s workplace law used to match America’s economy, with specific wage and workplace safety laws tailored to bakeries, factories, and farms, and a labor law that allowed unions to bargain with bosses at the level of the individual workplace.

Then labor started to struggle, and America’s labor policies stopped growing or at least stopped being enforced effectively. At the same time, the computing power of microchips started to double every two years, changing the lives of white-collar and blue-collar workers. Fifty years in, workplace law no longer protects the way millions of Americans work.

America’s labor policies create burdens, frictions, and uncertainties for businesses too. Old laws create rigid rules that limit companies’ ability to innovate. With no voice for employees in the AI transition, the result is what we see now: workers fearing for their jobs and bosses trying to guess the right AI moves without much feedback from the people who do the work.

The future is here, promising a whole world of creativity and productivity, and yet workers and bosses are both worried. The Harvard report offers one solution: a guaranteed place for workers in AI strategy.