The Workplace of the Future
As artificial intelligence pushes beyond the tech industry, work could become fairer — or more oppressive
Artificial intelligence (AI) is barging its way into business. As our special report this week explains, firms of all types are harnessing AI to forecast demand, hire workers and deal with customers. In 2017 companies spent around $22bn on AI-related mergers and acquisitions, about 26 times more than in 2015. The McKinsey Global Institute, a think-tank within a consultancy, reckons that just applying AI to marketing, sales and supply chains could create economic value, including profits and efficiencies, of $2.7trn over the next 20 years. Google’s boss has gone so far as to declare that AI will do more for humanity than fire or electricity.
Such grandiose forecasts kindle anxiety as well as hope. Many fret that AI could destroy jobs faster than it creates them. Barriers to entry from owning and generating data could lead to a handful of dominant firms in every industry. Less familiar, but just as important, is how AI will change the way people are monitored and managed at work.
Surveillance at work is nothing new. Factory workers have long clocked in and out; bosses can already see what idle workers do on their computers. But AI makes ubiquitous surveillance worthwhile, because every bit of data is potentially valuable. Few laws govern how data are collected at work, and many employees unguardedly consent to surveillance when they sign their employment contract. Where does all this lead?
Trust and telescreens
Start with the benefits. AI ought to improve productivity. Humanyze, a startup that sells smart ID badges to track employees around the office, merges data from those badges with employees’ calendars and e-mails to work out, say, whether office layouts favour teamwork. Slack, a workplace messaging app, helps managers assess how quickly employees accomplish tasks. Companies will see when workers are not just dozing off but also misbehaving. They are starting to use AI to screen for anomalies in expense claims, flagging receipts from odd hours of the night more efficiently than a carbon-based beancounter can.
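For a sense of what such screening involves, below is a minimal sketch of an expense-claim check. It is not any vendor’s actual product; the records, field names and thresholds are invented for illustration.

```python
from datetime import datetime

# Hypothetical expense records: (employee, amount in dollars, receipt timestamp)
claims = [
    ("alice", 42.10, datetime(2018, 3, 12, 13, 5)),
    ("bob", 310.00, datetime(2018, 3, 13, 2, 47)),   # receipt issued at 02:47
    ("carol", 18.50, datetime(2018, 3, 14, 19, 30)),
]

ODD_HOURS = range(0, 5)      # receipts issued between midnight and 5am
AMOUNT_THRESHOLD = 250.0     # unusually large single claim

def flag_claim(amount, timestamp):
    """Return a list of reasons why this claim deserves a human look."""
    reasons = []
    if timestamp.hour in ODD_HOURS:
        reasons.append("receipt issued in the middle of the night")
    if amount > AMOUNT_THRESHOLD:
        reasons.append(f"amount {amount:.2f} exceeds threshold {AMOUNT_THRESHOLD:.2f}")
    return reasons

for employee, amount, ts in claims:
    reasons = flag_claim(amount, ts)
    if reasons:
        print(f"{employee}: {'; '.join(reasons)}")
```

A real system would learn what counts as “odd” from historical claims rather than rely on fixed rules, but the principle of surfacing outliers for human review is the same.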
Employees will gain, too. Thanks to strides in computer vision, AI can check that workers are wearing safety gear and that no one has been harmed on the factory floor. Some will appreciate more feedback on their work and welcome a sense of how to do better. Cogito, a startup, has designed AI-enhanced software that listens to customer-service calls and assigns an “empathy score” based on how compassionate agents are and how fast and how capably they settle complaints.
Machines can help ensure that pay rises and promotions go to those who deserve them. That starts with hiring. People often have biases, but algorithms, if designed correctly, can be more impartial. Software can flag patterns that people might miss. Textio, a startup that uses AI to improve job descriptions, has found that women are likelier to respond to a job that mentions “developing” a team rather than “managing” one. Algorithms will pick up differences in pay between genders and races, as well as sexual harassment and racism that human managers consciously or unconsciously overlook.
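As a toy illustration of the kind of pay-gap check described above, the sketch below compares average salaries across groups using made-up figures; a real audit would control for role, seniority and location rather than compare raw averages.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical payroll records: (employee, group, annual salary)
payroll = [
    ("a", "women", 61000), ("b", "women", 64000), ("c", "women", 58000),
    ("d", "men",   70000), ("e", "men",   66000), ("f", "men",   73000),
]

salaries_by_group = defaultdict(list)
for _, group, salary in payroll:
    salaries_by_group[group].append(salary)

averages = {group: mean(vals) for group, vals in salaries_by_group.items()}
baseline = max(averages.values())

for group, avg in averages.items():
    gap = (baseline - avg) / baseline
    if gap > 0.05:  # flag gaps wider than 5% for review
        print(f"{group}: average pay {avg:,.0f} is {gap:.0%} below the best-paid group")
```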
Yet AI’s benefits will come with many potential drawbacks. Algorithms may not be free of the biases of their programmers. They can also have unintended consequences. The length of a commute may predict whether an employee will quit a job, but screening candidates on it may inadvertently harm poorer applicants, who tend to live farther from the office. Older staff might work more slowly than younger ones and could risk losing their positions if all the AI looks for is productivity.
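To make the proxy problem concrete, here is a toy check, with invented numbers, of whether an innocuous-looking feature such as commute length stands in for something sensitive like household income.

```python
from statistics import correlation  # Python 3.10+

# Hypothetical applicant data: commute time (minutes) and household income ($000s).
commute_minutes = [15, 25, 35, 45, 55, 65, 75]
household_income = [95, 80, 72, 60, 52, 45, 40]

r = correlation(commute_minutes, household_income)
print(f"Correlation between commute length and income: {r:.2f}")
if abs(r) > 0.5:
    print("Commute length acts as a proxy for income; scoring candidates "
          "on it may disadvantage poorer applicants.")
```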
And surveillance may feel Orwellian — a sensitive matter now that people have begun to question how much Facebook and other tech giants know about their private lives. Companies are starting to monitor how much time employees spend on breaks. Veriato, a software firm, goes so far as to track and log every keystroke employees make on their computers in order to gauge how committed they are to their company. Firms can use AI to sift through not just employees’ professional communications but their social-media profiles, too. The clue is in Slack’s name, which stands for “searchable log of all conversation and knowledge”.
Tracking the trackers
Some people are better placed than others to stop employers going too far. If your skills are in demand, you are more likely to be able to resist than if you are easy to replace. Paid-by-the-hour workers in low-wage industries such as retailing will be especially vulnerable. That could fuel a resurgence of labour unions seeking to represent employees’ interests and to set norms. Even then, the choice in some jobs will be between being replaced by a robot or being treated like one.
As regulators and employers weigh the pros and cons of AI in the workplace, three principles ought to guide its spread. First, data should be anonymised where possible. Microsoft, for example, has a product that shows individuals how they manage their time in the office, but gives managers information only in aggregated form. Second, the use of AI ought to be transparent. Employees should be told what technologies are being used in their workplaces and which data are being gathered. As a matter of routine, algorithms used by firms to hire, fire and promote should be tested for bias and unintended consequences. Last, countries should let individuals request their own data, whether they are ex-workers wishing to contest a dismissal or jobseekers hoping to demonstrate their ability to prospective employers.
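One widely used sanity check of this kind is the American “four-fifths” rule, which compares selection rates across groups of applicants. The sketch below applies it to made-up numbers and is illustrative only, not a substitute for a proper audit.

```python
# The "four-fifths" (80%) rule: a group's selection rate should be at least
# 80% of the best-performing group's rate. The applicant counts are invented.

outcomes = {
    # group: (applicants screened by the algorithm, applicants passed to interview)
    "group_a": (200, 90),
    "group_b": (180, 55),
}

selection_rates = {g: passed / screened for g, (screened, passed) in outcomes.items()}
best_rate = max(selection_rates.values())

for group, rate in selection_rates.items():
    ratio = rate / best_rate
    status = "OK" if ratio >= 0.8 else "possible adverse impact"
    print(f"{group}: selection rate {rate:.0%}, {ratio:.0%} of the best rate -> {status}")
```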
The march of AI into the workplace calls for trade-offs between privacy and performance. A fairer, more productive workforce is a prize worth having, but not if it shackles and dehumanises employees. Striking a balance will require thought, a willingness for both employers and employees to adapt, and a strong dose of humanity.
This article appeared in the Leaders section of the print edition under the headline “AI-spy”