Robots Are Coming For Many Jobs, Including Journalists

From a Wall Street Journal story by Greg Ip headlined “The Robots Have Finally Come for My Job”:

For centuries, new waves of automation have been greeted by predictions of widespread job loss and convulsive disruption. For centuries, the predictions have been wrong.

Could artificial intelligence be different? The weight of history says no. The revolutionary character of ChatGPT demands that we reconsider.

AI has been seeping into our lives for years now, such as completing our sentences in emails and web searches. Yet going from those iterations to “generative AI” such as ChatGPT is like going from dynamic cruise control to full self-driving. ChatGPT can answer questions in ways we thought were the exclusive preserve of humans, more quickly and cheaply.

A handful of experiments point to the astonishing potential of generative AI to replace workers. With ChatGPT, professionals such as grant writers, data analysts and human-resource specialists were able to produce news releases, short reports and emails in 37% less time on average (a saving of about 10 minutes) and with superior results, according to a study by Shakked Noy and Whitney Zhang, doctoral students at the Massachusetts Institute of Technology.

In a separate experiment by Microsoft Corp. researcher Sida Peng and three co-authors, programmers using a tool based on a model developed by OpenAI, the startup behind ChatGPT, cut the time required to program a web server by more than half.

These are game-changing results. Goldman Sachs Group Inc. economists, generalizing to the entire economy, conclude generative AI could raise labor-productivity growth—the building block for economic growth—by almost 1.5 percentage points a year, a de facto doubling from its current rate.

Higher productivity means that some workers, or types of work, will no longer be needed. Automation has been displacing labor for centuries, of course, but historically it took its toll on routine, repetitive work. Generative AI, by contrast, hits well-paid, college-educated professionals right in their human capital.

OpenAI and University of Pennsylvania researchers asked a team of humans and a ChatGPT-like model to evaluate which occupations were most exposed to generative AI. Some jobs—such as dishwashers, motorcycle mechanics and short-order cooks—were deemed to have no exposure. The most vulnerable occupations included mathematicians, interpreters and web designers. Some 19% of all workers could see at least half their tasks affected, the study concluded. Among the occupations potentially 100% exposed: journalist.

To paraphrase the old saying about recessions and depressions, technological disruption is when your neighbor is automated out of a job; the robot apocalypse is when you are automated out of a job. Professionals, including people who write columns for a living, now know the fear of obsolescence that has stalked blue-collar workers for generations. It might be a coincidence, but layoffs these days are falling more on the former than the latter.

Naturally, there are caveats. ChatGPT makes mistakes: It directed me to a nonexistent study while I researched this column. But whether ChatGPT is “right” misses the point.

“What a large language model is trying to do is not to provide correct answers, but pleasing answers,” said Jim Manzi, a partner at Foundry.ai, which develops AI applications for business. “Its job is to anthropomorphize, to give answers people like.”

All AI does this: an algorithm designed to find a photo of a dog is trained on pictures that humans say look like dogs; it isn’t, in an objective sense, “right.”

This is a good reason not to treat anything ChatGPT tells you as objective truth. (Science fiction legend Isaac Asimov anticipated that. In his 1941 short story “Liar!,” a telepathic robot tells people things that aren’t true to avoid hurting their feelings, with tragic results.)

But the same is true for a lot of what humans say. A legal or medical opinion, a college-course syllabus, or a column in a newspaper isn't objectively right or wrong, so why shouldn't an AI come close? And with time, ChatGPT is going to make fewer factual errors. The latest version reportedly scores 150 points higher on the SAT than the previous version. To address its atrocious performance on basic algebra, some versions come with a specialized math chatbot plug-in.

True, AI could be dangerous. It might mislead people, spread or amplify divisive or hateful speech or take decisions away from humans. But all innovations come with negatives; in only a few cases such as cryptocurrency and fentanyl do they eclipse the benefits. In any case, such concerns aren’t likely to slow development when there is so much money to be made and China is racing ahead.

So there are lots of reasons ChatGPT could wipe out more jobs than past innovations. And yet, the preponderance of evidence still points in the other direction.

Predictions of technology’s labor-market impacts are notoriously flawed. Experiments like those involving AI often fail to replicate in the real world. Nearly two decades ago, the advent of international fiber-optic connections led some scholars to estimate a fifth of U.S. jobs, such as radiologist, could be offshored. Nothing even close to that happened. A decade ago, economists began warning that self-driving trucks would deprive millions of high-school graduates of good-paying jobs. Today, there are more truck drivers than ever and employers are begging for more.

Often, the technology isn’t good enough or human tasks are too complicated to be replaced. Regulation and inertia get in the way, so the impact unfolds over many years and can’t be detected amid countless other forces at work.

Joshua Gans, an economist specializing in AI at the University of Toronto, said: “Technological changes turn something that was scarce into something that is abundant,” and in the process, “reveal to us what the real value of that stuff is.” Journalists’ greatest value, he said, will be in asking good questions and judging the quality of the answers, not writing up the results.

Spreadsheets made math-intensive analysis easy and cheap, and as a result, led to the creation of countless new tasks and occupations. Large language models could similarly lead to an explosion in applications requiring the synthesis of large amounts of information into serviceable prose. Dr. Gans and a colleague are developing a chatbot trained on course-reading materials and lecture transcripts to which students can pose questions as they would a teaching assistant—a job for which the university has little budget.

“Like almost all AI, we’ll see slices of labor get replaced by the machine,” said Mr. Manzi. “As more and more slices of human labor get replaced by machine, the humans have to stay ahead of the machine. Thus far in human history, it has always worked out that way.”

Greg Ip is the chief economics commentator for The Wall Street Journal.
