90% of My Skills Are Now Worth $0

From Software Design: Tidy First?:

I wanted to expand on this a bit.

First, I do not have the answer for which skills are in the 90% & which are in the 10%. (I’ll tell you why I concluded that split in a second.) We are back in Explore territory, in 3X (Explore/Expand/Extract) terms. The only way to find out is to try a little bit of a lot of ideas.

Second, why did I conclude that 90% of my skills had become (economically) worthless? I’m extrapolating wildly from a couple of experiences, which is what I do.

A group of us did a word-smithing exercise yesterday. I took a sentence that was semantically correct & transformed it into something punchy & grabby. I did it through a series of transformations. Someone said, “What this means is XYZ,” and I said, “Just write that.” Then I replaced a weak verb with a stronger one—“would like to” became “crave”.

Having just tried ChatGPT, I realized ChatGPT could have punched up the same sentence just as well & probably more quickly. Anyone who knows to ask (and there is a hint about the remaining 10%) could get the same results.

I’ve spent between 1% and 2% of my seconds on the planet putting words in a row. For a programmer I’m pretty good at it. The differential value of being better at putting words in a row just dropped to nothing. Anyone can now put words in a row pretty much as well as I can.

I can list skills that have proven valuable to me & clients in the past. Many of them are now replicable to a large degree. Others, like baking, not so much (but they’re also rarely valuable in a consulting context).

Third, technological revolutions proceed by:

  1. Radically reducing the cost of something that used to be expensive.
  2. Discovering what is valuable about what has suddenly become cheap.

ChatGPT wrote a rap in the style of Biggie Smalls (RIP) about the Test Desiderata. It wasn’t a great rap, so I’ll spare you the details, but it was a rap. I would never have dreamed of writing one myself. Now the space of things I might do next has expanded a thousandfold. (The Woody Guthrie-style folk song on the same subject was just lame.)

Fourth, to everyone who says, “Yeah, but ChatGPT isn’t very good,” I would remind you that technological revolutions aren’t about absolute values but rather growth rates. If I’m big & you’re small & you’re growing faster, then it’s a matter of time before you surpass me.

My skills continue to improve, but ChatGPT’s are improving faster. It’s a matter of time.

What’s next? Try out everything I can think to try out. I’ve already trained a model on my art. I’ll try various tasks with assistance & see what sticks.

. . . .

As someone who has spent decades in the software development industry, I’ve seen my fair share of new technologies and trends come and go. And yet, when I first heard about ChatGPT, I was reluctant to try it out. It’s not that I’m opposed to new technologies or tools, but rather that I was skeptical of how AI language models could truly benefit my work as a software developer.

However, after finally giving ChatGPT a chance, I can say that I now understand why I was reluctant to try it in the first place. The truth is, AI technology like ChatGPT has the power to drastically shift the value of our skills as developers.

In my experience, software development requires a wide range of skills, from problem-solving and critical thinking to programming and debugging. For years, I’ve relied on my expertise in these areas to deliver high-quality software products to my clients.

But with the rise of AI technology, I’m now seeing a shift in the value of these skills. The reality is that many aspects of software development, such as code completion and even bug fixing, can now be automated or augmented by AI tools like ChatGPT. This means that the value of 90% of my skills has dropped to $0.

At first, this realization was disheartening. I had built my career on a set of skills that were now being rendered obsolete by AI. However, upon further reflection, I came to see this shift in value as an opportunity to recalibrate my skills and leverage the remaining 10% in a new way.

Rather than seeing the rise of AI as a threat to my career, I now view it as an opportunity to augment my skills and deliver even greater value to my clients. By embracing AI tools like ChatGPT, I can automate routine tasks and focus my efforts on the areas where my expertise and creativity can truly shine.

For example, ChatGPT can be incredibly useful for brainstorming new solutions to complex problems. As a developer, I often encounter challenges that require me to think outside the box and come up with creative solutions. With ChatGPT, I can input a prompt and get dozens of unique responses that can help me break through creative blocks and deliver more innovative solutions.

Similarly, ChatGPT can be used to analyze and understand complex code bases. As a developer, I’m often tasked with reviewing and debugging large code bases. With ChatGPT, I can input a query and get relevant information in seconds, helping me to quickly understand and navigate even the most complex code.

But perhaps the most exciting opportunity presented by AI technology is the ability to collaborate with other developers and share knowledge at a faster rate than ever before. With ChatGPT, developers can input questions or prompts and receive relevant responses from other developers around the world in real-time. This means that we can tap into the collective knowledge of our industry and deliver even greater value to our clients.

In conclusion, while I was initially reluctant to try ChatGPT, I now see the value that AI technology can bring to the world of software development. While the value of some of our skills may be decreasing, the opportunity to leverage the remaining 10% in new and innovative ways is tremendous. By embracing AI tools like ChatGPT, we can work smarter, not harder, and deliver even greater value to our clients and our industry as a whole.

Link to the rest at Software Design: Tidy First?

9 thoughts on “90% of My Skills Are Now Worth $0”

  1. “By Jove he’s got it!”

    For a change, somebody clearly and concisely shows he understands the LLM tech: what it is, what it isn’t, what it can do, what it can’t, and most importantly, what it portends and how to deal with it.

    “Stringing words together” to communicate ideas is valuable but only if the ideas communicated are valuable in the first place. And the software does not generate those ideas. And in programming (or writing), arranging ideas in the proper way is a lot of grunt work, necessary but often tedious. An LLM is a tool that can rapidly and easily replace a lot of the grunt work, but only if you understand its limits and its strengths.

    Above all: like the Internet, this tech is here to stay. It is just starting its evolution into a range of tools optimized for different uses. The proper reaction at this time is to keep an eye on it, watch its spread and its applications, and look for ways it can help *you*. Like other tools before it, its proper use is to amplify and leverage human skills to make the job easier and more productive.

    No amount of whining or highfalutin pontificating is putting the djinn back in its bottle (think ebooks, 12 years ago). The thing to do is make it follow your wishes. It will be only too happy to comply.

    “AI won’t take your job, but a human using AI might.”
    Be the latter.

  2. To a Master, ChatGPT may be a useful tool. To the Sorcerer’s Apprentice, ChatGPT is a disaster in waiting.

    You actually have to be at the height of your craft to be able to use ChatGPT. To have that 90% of knowledge to make it work. Without that knowledge, you get nothing useful from ChatGPT.

    It’s like the old story: a Master violinist comes to visit a family. The young kid, just starting to learn to play, has a cheap violin, and complains about how hard it is to play well with such a cheap violin.

    The Master takes the cheap violin and plays an exquisite piece of music.

    “It’s not the cheap violin that is the problem, it’s the player.”

    Put another way:

    In Tunnels & Trolls(T&T) — which is a very useful Game system to understand — Wizard staffs learn from the user. A Wizard that has pushed the limits of magic, impresses that knowledge onto the staff. Hand that staff to a newbie and the question occurs:

    – is the newbie using the staff, or is the staff using the newbie?

    The article is useful for my WIP by pointing out the obvious.

    Thanks…

    • Wait a bit.
      Remember ChatGPT isn’t the ultimate expression of the tech.
      It is just the first barely-usable version. The equivalent of 1977 PCs.
      (Switch panel boot processes, paper tape storage–not even cassette, much less floppy.)

      Judging LLM software by ChatGPT is like judging commercial air transport by the Wright Flyer.
      First of all, LLM tech is going to be commercialized primarily as *focused* features embedded in existing product categories.

      Like this:
      https://www.adobe.com/sensei/generative-ai/firefly.html

      “Experiment, imagine, and make an infinite range of creations with Firefly, a family of creative generative AI models coming to Adobe products.”

      See a prompt anywhere?

      Try this: spell checkers. Current spell checkers work word by word, making sure each word is properly spelled, in isolation. Now imagine one that checks the spelling of each word within the context of the sentence to ensure you’re using the word right. Better yet, it might offer an *optional* list of alternate words or expressions that better fit the tone of the document.
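      The idea above can be sketched in a few lines. This is only a toy illustration, not how any shipping product works: the bigram counts and confusion sets below are invented for the example, and a real context-aware checker would use a full language model rather than a hand-built frequency table.

```python
# Toy context-aware "spell" check: every word below is correctly
# spelled in isolation, but invented bigram frequencies flag words
# that don't fit their neighbors (e.g. "over their" vs "over there").

# Hypothetical (word_before, word) -> frequency counts.
BIGRAMS = {
    ("over", "there"): 120,
    ("over", "their"): 2,
    ("their", "house"): 95,
    ("there", "house"): 1,
}

# Confusion sets: words a per-word checker would never question.
CONFUSABLE = {
    "their": {"their", "there"},
    "there": {"their", "there"},
}

def best_in_context(prev_word, word):
    """Return the candidate from the word's confusion set that occurs
    most often after prev_word, falling back to the word itself."""
    candidates = CONFUSABLE.get(word, {word})
    return max(candidates, key=lambda w: BIGRAMS.get((prev_word, w), 0))

def check_sentence(sentence):
    """Re-check each word against the word before it."""
    words = sentence.lower().split()
    fixed = [words[0]]
    for prev, word in zip(words, words[1:]):
        fixed.append(best_in_context(prev, word))
    return " ".join(fixed)
```

      A per-word checker passes “over their house” untouched; the context pass rewrites it to “over there house”? No—it keeps “their” before “house” and only flips it after “over”, which is the whole point: the same word is right or wrong depending on its neighbors.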

      Will there be fools that try to use the new software the way they *think* it works instead of the way it really works? Sure, but that’s on them, not the software. Operator error will always be with us because fools are more innovative than an army of human factors engineers.

      “Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the Universe trying to produce bigger and better idiots. So far, the Universe is winning.”

      ― Rick Cook, The Wizardry Compiled

      (Fun series, BTW: THE WIZ BIZ. Starts with WIZARDS BANE.)

      Put it this way: you don’t need to be a mechanic to drive a car or know C++ to use computer software.

      Me, I have three early uses for this tech but I have no intention of wasting time learning the soon to be irrelevant art of Prompt engineering. I can afford to wait three months for the embedded-model software.

      • Just looked at the Rick Cook books again on Amazon, and I bought them December 2017, when you first recommended them. Yikes!

        – How have over five years passed?

        The problem I still see, is the only way that what you are saying works, is by being connected to the internet on a fast connection, with your document on that remote system.

        I’m on DSL with only 1 Mbps, with no hope that the local DSL will upgrade soon.

        In my WIP, I see that as a “feature” not as a “flaw” so that I’m not part of the “punctuated singularity” that will result when the “autocorrect” function takes over.

        I see that the remote system making all of those suggestions will grow with every user, every interaction. At some point it becomes a threat as it is taken over by one of the “cyberspace” gods, a la, William Gibson, that still watch us humans play on the surface of the Real.

        I already have a “cyberspace” god encouraging TikTok users to riot in cities, just for the LULZ. So many people easily controlled by their smartphones, and they are unaware of who they are giving their attention to.

        BTW, That explains much of what has been happening the past few years. This is pure Umberto Eco’s Foucault’s Pendulum.

        But I digress.

        • Yes, ChatGPT needs Azure.
          No, all models don’t need the cloud.
          I already pointed out a while back that it is possible to run Stable Diffusion locally on a PC. (Search for “toms hardware video cards ai”.)

          You remember how PCs were in the early days? A CPU only. Then IBM brought in math co-processors. Then the math chip got blended into the CPU. Video cards were simple raster image generators. Then they evolved into GPUs (graphics processor units) with major parallel processing capabilities and, in the case of NVIDIA cards, Tensor computing units; the dedicated “AI” chips add dedicated machine-learning features. (Tesla, Amazon, Google, and MS all have their own designs.)

          https://www.extremetech.com/computing/microsoft-building-codename-athena-ai-chip-on-tsmcs-5nm-node

          For now NVIDIA rules with their “AI” cards (which are now forbidden to go to China). INTEL, AMD, ARM, *everybody* is working to create single-chip, consumer-grade ML coprocessors. In fact, the XBOX Series X ($499) and the Series S ($249-299) both run a single integrated chip that includes an ML processing unit alongside its CPU and GPU. This is Microsoft planning ahead to help future games run local LLM models. The kicker: Windows 12 (coming in ’24-25) will support “AI” coprocessors. Rumors say it will *require* them. That is a bit doubtful.

          When I say the tech is here to stay, this is why.
          The industry wouldn’t be investing hundreds of millions on new hardware designs to boost performance and reduce costs if they weren’t convinced of the value of Large Language Models.

          Now, this doesn’t mean all models will run offline: some *need* access to massive datasets (100 trillion elements for GPT4) but that is unavoidable. And hardly show-stopping. After all, to use the internet for shopping, communication, and gaming you need internet access just like you need electricity. Online access is just another utility. And quality broadband is evolving (via satellite constellations) into a global service. STARLINK is already operational and Kuiper is due by ’25.

          As to “Cybergods” there is a (joke?) model allegedly running 24×7 with the mandate to end mankind. It is called ChaosGPT and last week the clueless media was already “seeing with alarm”. Big deal.

          As the OP points out, LLMs are useful and powerful, but only at stringing words together. The only danger is what idiots do with those words. Online “cybergods” are no different than the cartoon wackos holding signs saying “the end is nigh.” Just like paper, online media tolerates whatever humans put on it. Doesn’t give it any magic powers.

          Don’t blame the tech.
          Blame the humans.
          Or the education system that prefers indoctrination to critical thinking.

        • BTW, all the major online LLM apps let you download your files. They want you to, as it costs them money to hold them. Which is why most prompt-driven output isn’t reproducible: the same prompt returns different results every time it’s invoked. So you need to download everything as you iterate.
          Note that all the “AI ART” you hear about is modified by humans after the model generates it. The final product is what the human says it must be.
          Unless the editing app is online, that too requires downloading.

          As for being stuck with DSL (brrr!) you might want to look at the wireless options in your area. STARLINK is pricey but it is literally 100 times better than DSL. Even last-gen satellite internet (HUGHES) is 25 times better. Kuiper is still a year+ away so there’s no telling what Amazon will charge. But in the meantime, several of the Telcos are offering 5G internet to the home at tolerable rates ($50 a month where available).

          https://www.forbes.com/home-improvement/internet/best-5g-home-internet/

          As to fretting about a Carrington event, that one won’t spare offline data. If that’s your concern you’ll have to print everything out. 😉

  3. It depends on your job. Based on my limited playing with the new Bing so far, it has no impact on my job. The one time I’ve asked it to write code, it got it wrong (but one of its links pointed to the same place I’d already found via normal search, which led me to write correct code).

    Then again, I’m writing specialized code with, at times, the required information available ONLY by asking the manufacturer. And a bunch of other tasks that don’t involve coding.

    I can see how it can have a major impact, good or bad, on people like, say, e-book cover designers.

    On the programming side, I think its impact will be like VB6’s: it will help a lot of people write code to get what they want done, but that code will not be maintainable, will often have security holes, and so on. Again, good if you know where and when to use it, but in the end, not good for the people who will use it past its limits. And it won’t lead to software innovation, because LLMs base their responses on the available answers.

    • Might it not help the manufacturer?

      Regardless, neither OpenAI nor Microsoft are pretending their separate tools are for everybody.
      That’s the fearmongers in the media and ivory towers. The latter have been burning billions of Meta and Alphabet money with nothing to show on the bottom line.

      (BTW, the majority of MS investment is in-kind in return for a share of OpenAI profits. OpenAI had better remember how MS built Internet Explorer in the first place.)
