Should You Be Thinking About AI-Proofing Your Career?


From ReadWrite:

We’re increasingly relying on artificial intelligence (AI) to automate elements of our daily lives, from issuing reminders to follow up on important work tasks to regulating the temperature of our homes. Already, automation has started to take over jobs in the manufacturing sector, and with the explosion of AI on the near horizon, millions of people are worried their jobs, too, could be taken by a sufficiently sophisticated machine (or algorithm).

AI has innumerable benefits, namely saving time and increasing reliability and safety, but it also comes with downsides. AI could introduce new security vulnerabilities into otherwise secure systems, and realistically could replace or displace millions of white-collar jobs once thought irreplaceable.

. . . .

Let’s start with a high-level assessment of the future of AI. AI is going to continue to advance, at rates that continue accelerating well into the future. In 2040, we may look back on the AI available today the same way our ubiquitous-internet-enjoying culture looks back on the internet of 1999.

Essentially, it’s conceivable that one day, far into the future, automation and AI will be capable of handling nearly any human responsibility. It’s more a question of when, not if, the AI takeover will be complete. Fortunately, by then, AI will be so embedded and so phenomenally powerful, our access to resources will be practically infinite and finding work may not be much of a problem.

But setting aside those sci-fi visions, it’s realistically safe to assume that AI will soon start bridging the gap between blue-collar and white-collar jobs. Already, automated algorithms are starting to handle responsibilities in journalism, pharmaceuticals, human resources, and law—areas once thought untouchable by AI.

. . . .

That said, AI isn’t a perfect tool. AI and automation are much better than humans at executing rapid-fire, predictable functions, but there are some key areas in which AI tends to struggle, including:

  • Abstract brainstorming and problem solving. If the problem is straightforward and puzzle-like, AI can likely solve it much better than a human. But humans are equipped with far superior abstract thinking capabilities. We, for lack of a better term, “think outside the box,” and can apply novel ideas to various situations. Accordingly, for the foreseeable future, we’ll likely remain better artists and more innovative problem solvers.
  • Human interactions. While there are some teams working on developing AI assistants (and even therapists) that can replicate basic human interactions, the fact remains people prefer to engage with other people, at least in certain industries. For example, when you’re buying a new home, you’ll want to have a real conversation with a qualified real estate agent, and when you’re struggling with a mental health issue, you’ll want to speak with a human being sitting across from you. Accordingly, jobs that are heavily dependent on human interaction will likely be protected for some time.
  • Situations with many (or unpredictable) variables. AI performs best in situations with firm, unbreakable rules, and the fewer rules there are, the better. When you get into situations with an increasing number of variables, or when those variables become unpredictable, AI begins to struggle. Accordingly, the higher up the management chain you go, the less likely it is that AI will be capable of handling the responsibilities.

Link to the rest at ReadWrite

PG will note that there are many AI vendors selling products for use in the legal world, and more seem to appear with each new issue of the various periodicals published for lawyers.

20 thoughts on “Should You Be Thinking About AI-Proofing Your Career?”

  1. “AI is going to continue to advance, at rates that continue accelerating well into the future.”

    It is not clear to me that this is true. There is a long history of AI research. The usual pattern is there is a breakthrough with a new technique. It produces impressive, and possibly even useful, results in some areas. People proclaim that we are on the verge of general artificial intelligence. Then the limits of this new technique are reached. Repeat. There are signs that the current generation of AI research is hitting that limit.

  2. Contemporary AI is reaching its first inflexion point: the time when the hype has run wild for so long that not even its promoters buy the narrative anymore.

    It is entering the “it’s a failure” phase where the media and pundits find something else to flog.

    The third phase is when, after years of quiet activity, people wake up and “suddenly” discover the stuff is all over the place. Neither panacea nor extinction threat. Just another tool.

    Like PCs in the 80’s and the Internet in the 90’s. Once miraculous or feared, now just routine and mundane.

    Bill Gates once famously said that people tend to overestimate new technologies in the short term and vastly underestimate them in the long term.

    AI as it now exists is just another version of the same automation trends we’ve been living with since the days of Ned Ludd and before. Just in the software realm, instead of the rubber and steel realm.

    True AI, btw, is nowhere near the horizon.
    But the hype has gotten so out of hand the people who do understand it are standing up to blow away the smoke.

    Here’s a recent release focusing on the limitations of what passes for AI.

    https://www.amazon.com/Rebooting-AI-Building-Artificial-Intelligence-ebook/dp/B07MYLGQLB/ref=tmm_kin_swatch_0?_encoding=UTF8&qid=&sr=

    It’s not a must-read by any stretch, but if you want to see why true AI might still be a couple of decades away…
    There’s dozens more. Pick any one. Or none.

    The media, as always, is way off in left field.
    The real challenge with AI isn’t Skynet, it’s the more mundane quality control issues of all software. Lockups and crashes. And, of course, user error. That last one will never go away.

    • “Lockups and crashes.”

      And the lack of any common sense when it is given anything not already in its database …

      Troubleshooting scripts make a good example. Dell started using them in their server support groups in 2000 because they couldn’t hire enough techs who actually knew hardware (or could be trained in it) to help their customers. Scripts are great for making sure you’ve checked each step, but not so good when the script doesn’t cover the actual problem or task. (I got my bosses mad at me because, when I took a call showing this was the third time the poor customer had called in for the same issue, my first line of documentation said: “Previous scripted troubleshooting failed to fix issue, manual troubleshooting from this point on.” What annoyed them even more was that the customer didn’t have to call back after I was done! 😉 )

      Scripts could be considered early ‘AI’: they were meant to take human thought (and human error) out of getting the work done. We have control systems now that can handle every step of flying a plane from one point to another, yet we still need pilots to cover for the autopilot when it gets confused.
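      Purely as an illustration of how rigid such a script is (the checks, symptoms, and remedies below are invented, not Dell’s actual tooling), a script is really just a fixed checklist, and anything it doesn’t anticipate falls straight through to a human:

      ```python
      # Toy sketch of a scripted troubleshooting flow. The steps and remedies
      # are made up; the point is that the script only "knows" what its
      # authors anticipated.

      def run_script(symptoms, checklist):
          """Walk the scripted steps in order; return the first remedy that applies."""
          for step_name, applies, remedy in checklist:
              if applies(symptoms):
                  return f"{step_name}: {remedy}"
          # Nothing in the script matched -- hand off to a human.
          return ("Previous scripted troubleshooting failed to fix issue, "
                  "manual troubleshooting from this point on.")

      checklist = [
          ("Check power",   lambda s: "no power" in s,       "reseat power cables"),
          ("Check cabling", lambda s: "no link" in s,         "replace network cable"),
          ("Check RAID",    lambda s: "degraded array" in s,  "replace failed drive"),
      ]

      print(run_script({"no link"}, checklist))              # covered by the script
      print(run_script({"intermittent reboot"}, checklist))  # not covered -> human
      ```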

      Writers/editors/artists have nothing to fear: there are so many conflicting ‘rules’ for each of those jobs that the poor AI would lock up just trying to satisfy them all!

    • True AI, btw, is nowhere near the horizon.

      True AI is right in front of us. It uses new ways to solve problems that previously needed human intelligence.

      We need a new word for what the machines do.

      • Yes we do, because what is being hyped isn’t actual intelligence. But as long as *they* burden themselves with the *expectations* of the true AI field, they’ll be judged by what they can’t do instead of what they can.

        If they were honest enough to admit they are merely virtualizing habit and muscle memory, they would get less breathless hype but also a lot less pushback.

        Need a new term? It exists: virtual intelligence. It can virtually pass for intelligence, if you don’t look too closely.

        • A lot of it is just computational speed. Map out a zillion chess moves, and narrow it down to the best. That’s different from what Kasparov does.

          “Intelligence” implies it does what humans do. It doesn’t. It gets the same results with a different method.

          I don’t discount the method at all. It has a great future, and has the potential to be indistinguishable to an observer. But we have now met something different, and in dealing with a potential competitor, it’s good to have an accurate picture of what it is: its strengths, our strengths, and the weaknesses of both.
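          For what it’s worth, “map out the moves and narrow it down” is, at its core, plain minimax search. Here is a minimal sketch over a toy game (the game and its scoring are invented just to keep the example runnable; real engines pile pruning and tuned evaluation on top of this, and this is not a description of how Deep Blue was actually built):

          ```python
          # Minimal minimax sketch: enumerate moves, recurse, keep the best.

          def minimax(state, depth, maximizing):
              moves = state.moves()
              if depth == 0 or not moves:
                  return state.score(), None          # static evaluation at the leaf
              best_value = float("-inf") if maximizing else float("inf")
              best_move = None
              for move in moves:
                  value, _ = minimax(state.apply(move), depth - 1, not maximizing)
                  if (maximizing and value > best_value) or (not maximizing and value < best_value):
                      best_value, best_move = value, move
              return best_value, best_move

          class Nim:
              """Toy game: remove 1-3 sticks per turn; whoever takes the last stick wins."""
              def __init__(self, sticks, to_move=+1):
                  self.sticks, self.to_move = sticks, to_move
              def moves(self):
                  return [n for n in (1, 2, 3) if n <= self.sticks]
              def apply(self, n):
                  return Nim(self.sticks - n, -self.to_move)
              def score(self):
                  # Terminal position: the side that just moved took the last stick and wins.
                  return -self.to_move if self.sticks == 0 else 0

          print(minimax(Nim(7), depth=10, maximizing=True))   # -> (1, 3): take 3 sticks
          ```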

          • I suppose we might also ask, just what does Kasparov do? We talk about intelligence all the time. How does it work in humans? Anyone have the manual?

            We can trace the actions of Deep Blue, but can we trace Kasparov’s?

            • Some chess players (reportedly) don’t plan sequences of moves but rather work by sculpting the chessboard state.

              Instead of worrying about what the opponent does or reacting to specific moves they worry about achieving a familiar/desired layout.

              It’s a balance-of-power thing where they limit the opponent’s potential moves and force them to make a suboptimal one. It doesn’t much matter which specific move, because by then the game is effectively over.
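              In machine terms, that style looks less like searching concrete lines and more like scoring positions against a desired shape. A hand-wavy sketch (the features and weights below are invented for illustration, not taken from any real engine):

              ```python
              # Toy "sculpt the position" evaluation: score candidate positions by how
              # well they match a desired layout (e.g. restricting the opponent's moves),
              # rather than by calculating concrete move sequences.

              WEIGHTS = {
                  "own_mobility":      +1.0,   # more options for us is good
                  "opponent_mobility": -1.0,   # fewer options for the opponent is better
                  "center_control":    +0.5,
                  "king_safety":       +0.8,
              }

              def positional_score(features):
                  return sum(WEIGHTS[name] * value for name, value in features.items())

              candidates = {
                  "cramp the opponent": {"own_mobility": 30, "opponent_mobility": 12,
                                         "center_control": 3, "king_safety": 2},
                  "open the position":  {"own_mobility": 34, "opponent_mobility": 28,
                                         "center_control": 1, "king_safety": 1},
              }

              print(max(candidates, key=lambda name: positional_score(candidates[name])))
              # -> "cramp the opponent": the layout that boxes the opponent in scores higher
              ```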

              • Sure. But how does a player sculpt? That’s what we call intelligence. We don’t have a clue what’s happening with a human.

                So, we are comparing a machine to a process we don’t understand, and claiming it is or is not the same as what we don’t understand.

                • I’d say we hope it isn’t like buggy software on digital chips. We don’t know what it is, and we don’t know why people are buggy.

                  It’s a Bandar Log situation with intelligence. “We are great. We are free. We are wonderful. We are the most wonderful people in all the jungle! We all say so, and so it must be true.”

  3. No. Didn’t I see a story a little while ago about AI programmers trying to get AI to recognize shapes, even when the shapes are in different contexts? They figured out that humans see the “skeleton” of a tree or a dog or whatever, and know right away what we’re looking at, regardless of the angle we see it from.

    Computers don’t do that, and programmers have to figure out that there’s too much they don’t know about the human brain to create a true artificial intelligence. They’re in good company, though: a lot of education fads are rooted in a poor-to-no understanding of how humans learn.

    So I’m not worried about the Geth. A virtual intelligence like Avina might be more attainable, which is basically Alexa or Siri with a holographic avatar to interact with. In fiction, I’d have a character customize the avatar so that it looked much cooler.

  4. Felix beat me to the BG quote on over- and underestimating new technologies. AI is certainly on, or approaching, the peak of a hype cycle, but I expect it to continue to play a larger and larger role in the economy and in life.

    Consider the example of cloud computing: it was hot about 10 years ago, now it’s old news, but cloud deployments dominate the industry and are major profit centers for the biggest tech companies with no end in sight. Current AI would not be possible without the compute power that cloud has made available.

    Innovation in AI is just now getting going in earnest. Machine learning, for example, has spawned at least a dozen promising variant learning strategies for different use cases. Machine learning computers have devised winning strategies for the game of Go that human experts had never considered. Computers out-psych humans at high-stakes poker because computers never flinch from audacious and unpredictable risks when the odds are right.

    The number of human neurons focused on improving AI staggers me. Combining cloud with AI, you can experiment with AI in ways I couldn’t imagine even a decade ago, at prices my high-school-student grandson can cover with odd-job money. Don’t underestimate the power of a million bright minds free to explore and invent.

    On the other hand, the more we know, the more we don’t know.

    Self-driving cars, for example, seem to be seriously stalled. The way I see it now, the easy 80% is done, but the remaining 20% is a killer.

    Computers aren’t much good at dealing with black swan outlier behavior, like odd shaped wide loads, wildly irrational drivers, and the myriad of things that require wide experience and an imagination to foresee.

    A human could supply the experience and imagination, but, at present, self-driving cars don’t work very well with human drivers at the wheel. I read stories about cars stopped in the middle of rush hour with the driver passed out intoxicated or simply asleep, miles from where they were last conscious. The systems have safeguards, but they are not good enough. And we keep hearing about fatal accidents in unforeseen situations.

    I predict the emphasis will shift from replacing human drivers to systems that make human drivers super-drivers, perhaps one human driving whole fleets simultaneously. 5G potentially makes that possible, but we have to go a long way in refining human-AI interaction before it will reach its potential.

    My crystal ball says that the next phase of AI will be in developing new ways for humans and AI to work together– combine the human secret sauce with the infallible memory and blazing speed of compute to make better decisions faster. (I’m not talking about Musk’s direct brain to computer interaction– that’s just the wire, determining what the signals should be is more important. And difficult.)

    Nevertheless, a lot of jobs will disappear. Robots will replace repetitive manual labor in many situations. One person with AI help will replace many. I’m glad my paralegal daughter is now in law school. I predict that the number of lawyers and paralegals will be decreasing, but the number of paralegals will decrease faster, and lawyers who are not prepared to take advantage of AI had best make plans to hire colleagues who are.

    My advice to young people today is to become polymaths. Study classical Chinese, Greek and Latin, quantum mechanics, anthropology, the most sophisticated mathematics you can bear, apprentice in a manual trade or two, and learn to code. Read a lot. The specialists will be machines. The generalists will run them.

  5. So, basically, sales would be one of those jobs that is unlikely to be replaced:
    – Lots of human interaction
    – Much non-linear thinking
    – A sale, or no sale, often depends on unspoken, deeply felt intangibles that are difficult to quantify

  6. All the “AI will take everybody’s job” talk is hype.
    Just like the idea that robots will.
    All it does is display a lack of understanding of just how complex a modern economy is, compared to the 19th century or earlier.

    Here, take a look at the Labor Dept database of job categories:

    https://www.bls.gov/oes/current/oes_nat.htm#00-0000

    Now, imagine having to craft a software replacement for each and every one. Or simply start by figuring out the ones that *can* be automated. Used car salesman. Think any AI can achieve Kurt Russell sleaziness? 😉

    Or, seriously, even if it could be achieved, would the return justify the investment? What if the business is inherently fluid and changes faster than the software can be optimized? Some jobs are marginal as is. They might be gone on their own long before proper software can replace them.

    Like PCs and robots, the point of software, whatever you call it, is to improve human productivity, not to replace humans. If nothing else, because humans are cheaper to retrain and more predictable. Slightly less chance of going all TAY on you. 😉
