AI Is Starting to Threaten White-Collar Jobs. Few Industries Are Immune.

From The Wall Street Journal:

Decades after automation began taking and transforming manufacturing jobs, artificial intelligence is coming for the higher-ups in the corporate office.

The list of white-collar layoffs is growing almost daily and includes job cuts at Google, Duolingo and UPS in recent weeks. While the total number of jobs directly lost to generative AI remains low, some of these companies and others have linked cuts to new productivity-boosting technologies such as machine learning and other AI applications.

Generative AI could soon upend a much bigger share of white-collar jobs, including middle and high-level managers, according to company consultants and executives. Unlike previous waves of automation technology, generative AI doesn’t just speed up routine tasks or make predictions by recognizing data patterns. It has the power to create content and synthesize ideas—in essence, the kind of knowledge work millions of people now do behind computers.

That includes managerial roles, many of which might never come back, the corporate executives and consultants say. They predict the fast-evolving technology will revamp or replace work now done up and down the corporate ladder in industries ranging from technology to chemicals.

“This wave [of technology] is a potential replacement or an enhancement for lots of critical-thinking, white-collar jobs,” said Andy Challenger, senior vice president of outplacement firm Challenger, Gray & Christmas.

Some of the job cuts taking place already are a direct result of the changes coming from AI. Other companies are cutting jobs to spend more money on the promise of AI and under pressure to operate more efficiently.

Meanwhile, business leaders say AI could affect future head counts in other ways. At chemical company Chemours, executives predict they won’t have to recruit as many people in the future.

“As the company grows, we’ll need fewer new hires as opposed to having to do a significant retrenchment,” said Chief Executive Mark E. Newman.

. . . .

Since last May, companies have attributed more than 4,600 job cuts to AI, particularly in media and tech, according to Challenger’s count. The firm estimates the full tally of AI-related job cuts is likely higher, since many companies haven’t explicitly linked cuts to AI adoption in layoff announcements.

Meanwhile, the number of professionals who now use generative AI in their daily work lives has surged. A majority of more than 15,000 workers in fields ranging from financial services to marketing analytics and professional services said they were using the technology at least once a week in late 2023, a sharp jump from May, according to Oliver Wyman Forum, the research arm of management-consulting group Oliver Wyman, which conducted the survey.

Nearly two-thirds of those white-collar workers said their productivity had improved as a result, compared with 54% of blue-collar workers who had incorporated generative AI into their jobs.

Alphabet’s Google last month laid off hundreds of employees in business areas including hardware and internal-software tools as it reins in costs and shifts more investments into AI development. The language-learning software company Duolingo said in the same week that it had cut 10% of its contractors and that AI would replace some of the content creation they had handled.

. . . .

United Parcel Service said that it would cut 12,000 jobs—primarily those of management staff and some contract workers—and that those positions weren’t likely to return even when the package-shipping business picks up again. The company has ramped up its use of machine learning in processes such as determining what to charge customers for shipments. As a result, the company’s pricing department has needed fewer people.

The use of generative AI and related technologies is also changing some jobs at UPS “by reducing repetitive tasks and physical stress,” UPS spokesperson Glenn Zaccara said.

As AI adoption grows, it is likely to reconfigure management hierarchies, the Oliver Wyman study projects. Entry-level workers are likely to bear the initial brunt as more of their duties are automated away. In turn, future entry-level work will look more like first-level management roles.

The cascading effect could flatten layers of middle management, the staging ground for senior leadership roles, according to the analysis.

More than half of senior white-collar managers surveyed in the study said they thought their jobs could be automated by generative AI, compared with 43% of middle managers and 38% of first-line managers.

Still, business leaders across the economy say they expect the new technology will augment and elevate some white-collar roles, giving employees and managers the means to do more meaningful work—both for their companies and in their careers.

At Prosus, a global technology-investment group based in the Netherlands, executives say that is already happening as AI automates more of its workforce’s tasks.

“Engineers, software developers and so on can do the work twice as fast,” said Euro Beinat, Prosus’s global head of AI and data science. “One of the side effects is that a lot of these employees can do more and do slightly different things than we were doing before.”

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

20 thoughts on “AI Is Starting to Threaten White-Collar Jobs. Few Industries Are Immune.”

  1. There’s knowledge work, and there’s people work. Very few get to senior management with only knowledge work (and in those companies where that happens, it ain’t a pretty sight).

    How will entry-level people learn how to build and run teams, motivate personnel, advance their big-picture skills? Can AI fill the role that business mentors used to have? Will they recognize personnel disasters in the making? Will anyone be able to debate stupid decisions? The first generation of hyper AI adoption may result in eating the seed corn for later advances — if everyone is a tool user, who are the tool builders and how did they get that way?

    I know I sound like a Luddite, but we’ve had two+ generations of people scrambling to master tech tools in the ordinary workspace. That’s inevitable, and it undoubtedly advanced the capabilities of what we can accomplish. Knowledge tools will continue to be useful, and there’s no point in resisting. But… if we can’t figure out how to get people into a workspace and then learn the means and methods of how a business runs and makes healthy decisions, that management incompetence will create its own demise.

    Learning how to use spreadsheets didn’t generate a bunch of people who understood what to use them for, any more than stenography did — garbage in/garbage out still applied. That will still be true of massive AI use. As a technical tool, it has many useful applications, some of which are new, and there will be personnel shifts/challenges as a result, but too many people have gotten used to working at home to quite understand what can actually be accomplished in teams that become more effective and expert over time. Tool-users are not Leaders, and Cargo Cults are not effective defenses against obsolescence.

    I find all of these “we won’t need white collar entry-level types any more” predictions silly. It’s true that the entry-level demographics are not looking good, but I see that as more of an opportunity for the best of them to get on the track and become expert in learning how to use the tools for the best-informed real world purposes. AI can’t teach you how to run a business without learning how people work, or what to use it for.

    • As a partial counter, too many get to senior management with only people-work “skillsets” (and it’s all too often only a subset thereof).

      Exhibit A: Boeing. Every single widespread safety problem at Boeing/with Boeing products has occurred since the CEO’s office became the domain of MBAs and not engineers. (And in particular, it has occurred after MBA takeover of the board of directors several years previously, which led into the misguided acquisition of McDonnell-Douglas justified on its face by the “Airbus threat” that would have been largely illusory had anyone on either side of the pond actually been enforcing even the moronic, self-interested antitrust policies and standards of the Reagan administration.)

      Exhibit B: much closer to home, commercial publishing. There’s a glass ceiling even more rigid than any gender/ethnicity/racial/class barrier — and it’s a glass ceiling built on the delusion that sales-and-marketing support is not just necessary, but sufficient. (Exhibit B-sub-1: The abysmal substantive and not-much-better writing quality of several “political memoirs” of 1990s political officeholders. Go ahead, name one that you actually read all the way to the end unless required to for the “day job,” regardless of its preconceived political fit with your own notions. Exhibit B-sub-2: The complete lack of staying power after the first royalty-reporting period of celebrity-“authored” children’s books.)

      Bluntly, top management requires both. I see no pathway for currently-conceived “AI” to acquire either, and particularly not to acquire either in a manner applicable outside of its existing experience. The inability of any current AI/enhanced-Eliza system — and both I and an academically tenured colleague have tested this with stuff that’s not in public release — to answer queries like “Including citations to the relevant and controlling legal authority, identify the factors a court of nationwide jurisdiction must consider when determining standing to challenge an administrative provision regulating what identifying information must appear on an application for a government benefit or declaration” with anything close to coherence is an example, if only because we don’t know enough about “good judgment” yet to set up systems that require it. I think I hear WOPR calling…

      • Oh, I quite agree. It’s a race to the bottom from “higher ups” without a strategic/reality bone in their bodies to “lower downs” who can’t even manage to talk to each other. Both are missing a major skillset (and apparently unable to repair the deficit, most of the time).

      • INTEL. They used to lap the field while engineers were in charge. They understood the tech *and* the competition, because the competitors were run by engineers too. The board turned control over to “managers” who decided it was more co$t effective to coast on their lead for a while. Bye-bye lead. Ten years later, they turned management back to the techies, who *hope* to catch up this year and leapfrog the competition by ’27.

        https://zeihan.us11.list-manage.com/track/click?u=de2bc41f8324e6955ef65e0c9&id=044267fb8c&e=09a22692a8

        The media keeps pretending that “AI” will replace humans, just as they used to pretend that “management is management” and still pretend “teaching is teaching”.
        No.
        You need to master the discipline and understand its logic to be able to do the job correctly. And that applies everywhere, from tech to farming. Give the greatest tool to an idiot and you still have an idiot on the job. Give that same tool to a trained employee, though, and he’ll be as effective as three idiots. Or more.

        The only exception is modern politics where any village idiot can (and does) play.

        • Most village idiots would be insulted by being compared to modern politicians. And rightly so… for definitions of “modern” going back to 08 August 1974. (If you don’t immediately know the significance of that date without looking it up, you haven’t been paying enough attention to the foundation of the greatest duty and privilege of ‘murikan citizenship: Voting.)

          That said, the remaining village idiots wouldn’t understand the comparison. Or distinction.

      • A certain degree of non-engineering talent at the top may be needed though. Consider the fate of Rolls-Royce in 1971: the engineers running the company had too much power and seem to have kept everyone in the dark about the huge technical and financial problems the company was facing (and quite possibly never explained the risk being taken on with the RB211 contract).

        Still, there is no reason why an engineer cannot learn finance so as to avoid such foolishness (and they are probably more suited to such study than the average MBA). Giving the moneymen total charge of a technology-based company is a recipe for disaster, though the clever operators will have extracted a shed load of money before offloading the company onto some gullible fool whilst stiffing the banks who financed everything.

        • Mike, that’s not quite fair to Rolls Royce: Almost no degree of either technical competence or (traditionally considered) managerial competence can ordinarily be expected to overcome competitors’ fraud, antitrust, and/or “domestic manufacture by law!” efforts. If there was a “mistake” made with RB211, it was in tying it too closely to a single airframe that was, umm, a camel.† With mange and a really foul temper for a camel.

          † The L1011 was, from the maintenance perspective of everything except the engines, a piece of garbage. It also tried to have lots of capability at lots of different things but ended up with insufficient capability at anything to be acquired for that purpose. Its failure is the reason that Lockheed sank beneath the waves, because early perceptions of its problems reduced confidence in other proposed Lockheed products, leading to a multiyear period of no new US government contracts.

          This is, frankly, the biggest problem with aircraft engines since the 1960s: Too-close customization to airframe, leading to yuuuuuuuuuuuuuuuuuuuge logistical challenges down the road and lesser performance. And it’s all been imposed by non-engineering, non-managerial considerations… As a former commander of aircraft maintenance squadrons who experienced this with six different airframes, it’s a sore point.

          • I agree with pretty much everything you say but, from a UK perspective, there is enough blame for the RB211 debacle to go round, and bankrupting the company was really the responsibility of the UK top management in general, and in particular of the engineers who advised the board on the contract.

            I would call it engineering hubris: the engineers badly underestimated the time and costs of the engine revolution they were aiming for and thinking that they could do this on a fixed price contract was risking the whole future of the company (with no real plan B when this wager was lost). In particular, gambling on a technological breakthrough in the use of carbon fibre for the engine’s blades was their “bridge too far” moment.

            Objectively, the development team’s performance on the project undertaken was remarkable. They actually did produce a revolutionary, world-leading engine, but it was just too late – and too expensive to develop – for their then shareholders.

        • Don’t forget engineering training is heavy on practicality and economics. Just because something is technically doable is no guarantee it is economically viable and understanding the difference is part of the job.

          (That was our primary lesson in final semester PLANT DESIGN AND ECONOMICS. Prof gave us a reference and a goal and pointed us at the library, expecting a full plant design and a go/no go recommendation at the end of the semester. It was technically viable and profitable…but ROI was less so than putting the required investment into T-bills. Recommendation: no go. The prof also spent a good chunk of the semester drilling technical writing into us. Most useful course of the lot.)

          Competent engineers know that you can’t hide facts too long, if at all.

          In the Intel case, the MBAs decided that it was too expensive to migrate manufacturing to leading edge tech and that their design chops were enough to stay on top even if they surrendered the manufacturing edge. They were warned otherwise and it proved not to be the case so they ended up lagging in manufacturing and tied in design. Fact is, design is cheaper to innovate than manufacturing. Quality control is HARD!!!

          A point to remember as globalization gives way to regionalization. Hint: keep an eye on Apple. They waited too long to loosen their ties to China.

  2. Well, I can name one set of white-collar jobs that AI is not going to threaten soon, and probably not ever:

    The editorial board of the WSJ — if only because someone will have to review what comes back for ideological acceptability† and to ensure that the paper’s ownership isn’t criticized in even the most roundabout fashion.

    † I’m picking on the WSJ here because it’s so obstinate, but the difference from NYT and WaPo and every other self-appointed news gatekeeper is one of (distressingly small) degree, not of kind.

  3. Still laughing: “It has the power to create content and synthesize ideas…”

    It does no such thing. It can’t even change its own batteries.

    Humans: do you have ANY idea how hard technicians and engineers work to keep those single-purpose robots running? Sure, they’re good at putting bolts in faster than a human – as long as they don’t drop the tools, break down, require a reset, or have to modify the job because this year’s bolts are longer.

    Have you ever had your computer just suddenly give you the black screen of death?

    Now multiply by very large numbers, and you’ll know why self-driving cars in San Francisco run over things.

    And their literary output is pathetic. And derivative (how could it be anything else) and grotesque.

    • AI software has many truly productive uses but not where the media is looking.
      A truly amazing use is Inverse Design:

      https://www.msn.com/en-us/news/technology/new-ai-tool-discovers-realistic-metamaterials-with-unusual-properties/ar-BB1i3cJr?ocid=nl_article_link&cvid=d3d282ad0bfe42edc7bbe21d46557992&ei=5

      “A coating that can hide objects in plain sight, or an implant that behaves exactly like bone tissue—these extraordinary objects are already made from “metamaterials.” Researchers from TU Delft have now developed an AI tool that not only can discover such extraordinary materials but also makes them fabrication-ready and durable. This makes it possible to create devices with unprecedented functionalities.

      “The properties of normal materials, such as stiffness and flexibility, are determined by the molecular composition of the material, but the properties of metamaterials are determined by the geometry of the structure from which they are built. Researchers design these structures digitally and then have it 3D-printed. The resulting metamaterials can exhibit unnatural and extreme properties. Researchers have, for instance, designed metamaterials that, despite being solid, behave like a fluid.

      “Traditionally, designers use the materials available to them to design a new device or a machine. The problem with that is that the range of available material properties is limited. Some properties that we would like to have just don’t exist in nature. Our approach is: tell us what you want to have as properties and we engineer an appropriate material with those properties. What you will then get is not really a material but something in-between a structure and a material, a metamaterial,” says Professor Amir Zadpoor of the Department of Biomechanical Engineering.”

      It sounds like SF but it is very real. Metamaterials are mundane molecules arranged in clusters that behave as “super atoms” and thus have unusual properties. Traditionally, the materials science researchers make a mix and then figure out how it behaves. No more.

      AI software works in reverse: you tell it what properties you want and it figures out the structure that delivers it. Mass producing it is the new challenge but big bucks lie that way. Way more than all the LLM text ever, combined.
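
      The flow described above can be sketched with a toy example. Nothing here comes from the TU Delft work: `forward_model`, its quadratic formula, and the target value are made-up stand-ins. The point is only the inversion of direction — instead of simulating a given structure to learn its properties, you search for the structure whose simulated property matches a target.

      ```python
      def forward_model(thickness):
          # Hypothetical stand-in for a simulator that predicts a material
          # property (say, stiffness) from one geometry parameter.
          # It is monotone in thickness, so bisection can invert it.
          return thickness ** 2 + 2.0 * thickness

      def inverse_design(target, lo=0.0, hi=10.0, tol=1e-9):
          # Invert the forward model: find the geometry whose predicted
          # property equals `target`, by bisection on the monotone model.
          while hi - lo > tol:
              mid = (lo + hi) / 2.0
              if forward_model(mid) < target:
                  lo = mid
              else:
                  hi = mid
          return (lo + hi) / 2.0

      # Ask for a property value of 8.0; the search recovers thickness 2.0,
      # since 2.0**2 + 2.0*2.0 == 8.0.
      print(inverse_design(8.0))
      ```

      Real inverse design replaces the one-line model with an expensive physics simulation and the bisection with an AI-driven search over high-dimensional geometries, but the loop is the same shape.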

      And that is just one of the advances the media never sees, much less angsts over.

      Wonders are coming but not where the politicians or activists are looking. Ignore them. Just wait a bit and see what comes next.

    • And their literary output is pathetic. And derivative (how could it be anything else) and grotesque.

      Autocrit added an AI function, which I found out when I clicked the button to find out “What does this do?” Apparently it generates ideas based on what you wrote so far. And I ended up rolling my eyes, because the ideas were so clichéd and derivative, and not even close to where I actually went with the story. I am not worried a machine will compete with my imagination.

      I have not experienced grotesque “AI” literature, though. I am easily squicked, so I avoid the generated art where the machine has no idea how many hands or fingers humans have. I didn’t even like the photo in my ninth grade textbook, of a hand with six fingers (polydactyly, I still remember). But it was a human writer who drove me to abandon a writing group once, based on the grotesque characters in his story. He seemed to have a weird hatred for the disabled, whether the disability was mental or physical. *Shudder* to think of what stories he would use to train the machines.

      • I love AutoCrit – but ONLY to count.

        It counts everything and anything – so I don’t have to. I like especially that it also counts how many times you’ve used a two to four word phrase, the latter of which you really don’t want to be doing except by design.
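
        That phrase-counting feature amounts to tallying word n-grams. A minimal sketch, assuming a simple whitespace tokenizer (AutoCrit’s actual rules are unknown here, and `phrase_counts` is a made-up name):

        ```python
        from collections import Counter

        def phrase_counts(text, n_min=2, n_max=4):
            # Tally every two- to four-word phrase so repeats stand out.
            words = text.lower().split()
            counts = Counter()
            for n in range(n_min, n_max + 1):
                for i in range(len(words) - n + 1):
                    counts[" ".join(words[i:i + n])] += 1
            return counts

        # Report only phrases used more than once.
        text = "the old house stood at the end of the old road"
        repeats = {p: c for p, c in phrase_counts(text).items() if c > 1}
        print(repeats)  # {'the old': 2}
        ```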

        All its other features I’ve always stayed away from – I distrust ANYONE other than me working on my stories, and letting AI in means you’ve let untold millions in.

        I am disabled. We’re just like everyone else in any of the ways that count, except we’re often not allowed to want what we want. It can be warping – but so can beauty.

        And, if you think about it, humans are much more likely to become disabled (the rate is around 1 in 5) than to become suddenly beautiful (which has to be worked hard at, and is often mostly illusion anyway).

        It underpins my fiction.

        • And, if you think about it, humans are much more likely to become disabled (the rate is around 1 in 5) than to become suddenly beautiful (which has to be worked hard at, and is often mostly illusion anyway).

          Good point. I am told I have arthritis in my knees, after a fall several years ago. Apparently I shouldn’t have shrugged that one off. As for Autocrit, its selling point was that it alerted me to my tendency to start sentences with character names / pronouns. The count on that was eye-opening, and an immensely useful revelation. The rest I can take or leave.

          • That kind of analysis is the area where I expect local LLMs to take hold, either as standalone software or Word Processor feature.
            Low hanging fruit.

  4. I see a lot of these pearl clutching reports and not a one addresses demographics.

    Boomers are leaving the work force, X’ers and Millennials are already there moving up to fill the holes, but leaving holes in their wake. The smallest extant generation is going to backfill the largest ever? (Never mind Zoomers’ attitude issues; there aren’t enough of them nor are they training to fit. There’s only so much demand for xxx studies entry level jobs.)

    No mention of the millions of open positions waiting for skilled candidates, either.
    Companies are latching onto “AI” because what else is there?

    https://www.uschamber.com/workforce/understanding-americas-labor-shortage#:~:text=Right%20now%2C%20the%20latest%20data%20shows%20that%20we,the%20U.S.%2C%20but%20only%206.5%20million%20unemployed%20workers.

    The mismatch between what the labor market needs and what the educational system is churning out doesn’t help.

    Now, against all this, the media gripes over 4,600 firings? 12,000?
    Easier to whine about AI, I suppose, than to look at the system more concerned with what bathroom the kids fit in than the jobs that might feed them for decades.
