AI Isn’t Really Artificial Intelligence

From Tech Register:

At its core, today’s AI is incapable of comprehension, knowledge, thought, or “intelligence.” This name is little more than a marketing gimmick.

Nothing’s easier to sell than a product with a good name. The technology that we call “artificial intelligence” is extremely complicated, but thanks to its name, you already have an idea of what it does! There’s just one problem: AI isn’t “intelligent” at any level, and corporations aren’t interested in correcting the public’s misconceptions.

There’s Nothing Intelligent About AI

Artificial intelligence is a longstanding staple of pop culture and real science. We’ve spent nearly a century pursuing this technology, and the idea of “living machines” goes back thousands of years. So, we have a pretty clear understanding of what someone means when they say “artificial intelligence.” It’s something comparable to human intelligence—the ability to comprehend, adapt, and have novel ideas.

But the technology that we call “artificial intelligence” lacks these qualities. It cannot “know” or “think” anything. Existing AI is just a mess of code attached to a big pile of data, which it remixes and regurgitates. You can ask ChatGPT to write you a resume, and it’ll spit out something based on the resumes in its dataset (plus whatever info you share). This is useful, and it automates labor, but it’s not a sign of intelligence.
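To make “remixes and regurgitates” concrete, here is a deliberately crude sketch of the underlying idea: a bigram model that can only emit word sequences recombined from its training text. (Real GPT-class systems predict tokens with billions of learned neural-network parameters rather than a lookup table, but the output is likewise driven entirely by patterns in the training data.)

```python
import random
from collections import defaultdict

# Toy "language model": record which word follows which in the training
# text, then generate by sampling from those observed continuations.
# Everything it outputs is a recombination of its input data.
training_text = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat chased the dog"
)

follows = defaultdict(list)          # word -> words seen following it
words = training_text.split()
for a, b in zip(words, words[1:]):
    follows[a].append(b)

def generate(start: str, length: int = 10) -> str:
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:              # no observed continuation: stop
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug and the dog sat"
```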

Of course, ChatGPT is a chatbot, so it can feel very “human.” But most AI applications are non-conversational; they don’t talk or answer questions. And without the veneer of a conversation, the lack of “intelligence” in AI is very noticeable.

Take Tesla’s self-driving cars, for example. Elon Musk has spent nearly a decade pretending that Tesla Full Self-Driving is just a year away: it’s almost ready, and it will be 150% safer than a human driver! Yet this AI program continues to linger in beta, and every time it makes the news, Full Self-Driving is criticized as a safety hazard. The AI isn’t even smart enough to do its job.

For a more down-to-earth example, just look at robot vacuums. They collect a ridiculous amount of data on your home in the name of obstacle avoidance and navigational AI. And while these AI-enabled robot vacuums are an improvement over what we had in the past, they still struggle with basic obstacles, like dog poop, kids’ toys, and small rugs.

Ordinary people, including a large number of people who work in technology, don’t know anything about AI or how it works. They just hear the phrase “artificial intelligence” and make an assumption. These assumptions may seem inconsequential, but in reality, they are a guiding force behind technological development, the economy, and public policy.

This Technology Is Useful, but the Marketing Is Nonsense

I don’t want to downplay the importance of AI or machine learning technology. You interact with this stuff every time you use your cellphone, search for something on Google, or scroll through social media. Machine learning drives innovation in physics, contributes to “Warp Speed” vaccine development, and is currently making its debut on the battlefield.

But the term “artificial intelligence” is plastered on this technology for marketing purposes. It’s a flashy name that tells customers and investors, “our product is futuristic and has a purpose.” As explained by AI researcher Melanie Mitchell in a conversation with the Wall Street Journal, companies and engineers routinely slap the name “AI” on anything that involves machine learning, as the phrase is proven to elicit a response from investors (who may know very little about technology, let alone AI).

This is something that you can see in nearly every industry. Just do a Google search for a company name and add the term “AI.” You’ll be shocked by the number of businesses that brag about their AI pursuits in vague language, with zero proof that this technology has actually contributed to their profitability, productivity, or innovation.

And, as noted by Dr. Mitchell, this same marketing tactic was utilized in the 1970s and 80s—companies and engineers secured massive amounts of funding with the promise of “artificial intelligence.” Their research was not a waste of money, but it wasn’t profitable, so the funding dried up. (Of course, software is much more important today than it was in the 20th century. The term “artificial intelligence” is now attached to useful products and processes, so people are less likely to lose interest.)

In some ways, I think that the name “artificial intelligence” is a good idea. Companies spent a good decade calling everything an “algorithm,” which only led to confusion and frustration among the general public. The pivot to “AI” generates a lot of enthusiasm, which should lead to a more rapid development of automated software technologies.

But this enthusiasm hides the fact that “AI” is a complicated, confusing, and narrow technology. People readily assume that today’s “AI” is similar to what we’ve seen in pop culture, and very few corporations are willing to fight (or comment on) this misconception. (That said, social media weirdos are the biggest offenders. They make the most extreme and patently false claims about AI, which are amplified and consumed by people who don’t know any better.)

. . . .

One of the promises of AI is that it will replace workers, leading to a utopia where humans sit on their hands all day or simply die off. Chatbots will write the news, robot arms will perform heart surgery, and super-strong androids will commit all of your favorite OSHA violations while constructing suburban homes. But in reality, the technology that we call “AI” simply offsets labor.

In some ways, the offset of labor created by AI is very obvious. This technology doesn’t comprehend a single thing in existence, so in order to make it perform a task correctly, it requires constant training, testing, and troubleshooting. For every job that an AI replaces, it may create a new job.

Many of these new jobs require expertise in machine learning. But a large number of workers involved in AI development perform “menial” labor. OpenAI was caught paying Kenyan workers less than $2 an hour to help remove racism, sexism, and violent content from its chatbot. And Amazon’s Mechanical Turk, which sells work as though “AI” performed it, often pays a human a few pennies to complete the task instead.

Link to the rest at Tech Register

PG isn’t convinced by the OP.

It’s not difficult to debunk a new technology. PG remembers experts who ridiculed the idea that every person would have a computer on her/his desk.

That was true. For a while.

We have them on our wrists and in our pockets and backpacks now.

PG has never read or heard anyone involved with AI research make the claim the OP attributes to them:

One of the promises of AI is that it will replace workers, leading to a utopia where humans sit on their hands all day or simply die off.

Putting words in the mouths of those one is attempting to scorn is a centuries-old practice.

Ironically, considering the view of the OP, an Australian/Iberian team has been experimenting with the design and implementation of different AI models to identify repeated, potentially false claims made by politicians in Spain and Australia. The system is called ClaimCheck.

35 thoughts on “AI Isn’t Really Artificial Intelligence”

  1. I’m reminded of the distinction between activity and results. When results matter, nobody cares about the machine’s mental and emotional state.

    If a machine pumps out a good book that people enjoy reading, few will care if its production was an emotionally wrenching struggle for some author. For those who do, make up an author name and bio.

    • Odds are the human controlling the software will do that before sending the book out.
      No need to reveal how the sausage is made; oversharing buys nothing but trouble.

    • I expect we’ll see fusion in basements long before.

      Meaning, “never”. HA!

      That leads to this:

      Lena
      https://qntm.org/mmacevedo

      BTW, the TV series “Lost” was about growing, harvesting, and training human “snapshots” for use in a galaxy-wide civilization. They start with billions of harvested humans that are then copied and retrained in the sideways as many times as needed to supply their industrial needs.

      • If your idea of fusion is TOKAMAK, then not this century, if ever. Which is why most savvy countries (Sweden, UK, Japan, Korea, even France) are pivoting to small modular fission reactors.

        However, Tokamaks are ornithopters. (“Birds fly by flapping wings, so let’s build aircraft that flap wings.” Langley and the Wright brothers instead figured out the *principles* that allowed flapping wings to work and paved the way to airplanes.) The sun is a hot ball of plasma generating heat, so Tokamaks try to control a hot ball of plasma, hoping to extract heat to move turbines.

        There are other ways to do fusion via first principles than hot plasma balls. In fact, Philo Farnsworth was doing fusion over half a century ago. The trick isn’t achieving fusion, but rather doing it safely at a scale that is useful.

        While governments try their big-science approach, a dozen private efforts in the US, UK, and Canada are spending private venture capital money on alternatives using known particle physics and electromagnetic principles to do fusion. One approach doesn’t even bother with turbines or heat engines and instead takes its cues from magnetohydrodynamics to extract electricity directly. Their design is a 30-foot-long cylinder.

        https://en.m.wikipedia.org/wiki/Helion_Energy

        Lockheed Martin is working on a “compact” reactor that might power a small city (or aircraft carrier) from the back of a truck. It appears to be derived in part from Robert Bussard’s WiffleBall POLYWELL concepts the Navy was exploring until sequestration cut their budget.

        At Los Alamos they used giant lasers to do fusion, and in the UK they are looking to use artillery. (Seriously: they use gunpowder to propel a piston.)

        All these and more expect to be running pilot plants long before the boondoggle ITER fusion project even finishes construction of a “technology demonstrator” circa 2035.

        Much like machine learning, these projects are aiming to solve real world problems rather than worrying about imitating the sun or human cognition. Whatever gets the job done, right? And some might scale downwards once we get room temperature superconductors.

        Which should also arrive before true AI.

        • I’m gonna say something controversial here, but hydrogen “fusion” is one of those great fantasy power sources that are useful in SF but do not exist in nature. No one has ever taken hydrogen and fused it into helium.

          – Helium is a decay particle, called an “Alpha Particle”. It is not produced by “fusion”.

          If you read the original paper by Bethe it is all theoretical and has never been shown in reality in a lab, so a hydrogen “fusion” reactor cannot be built since it is impossible in nature. If it were physically possible we would have built one by now.

          Bethe won the Nobel Prize for “Energy Production in Stars”
          https://www.nobelprize.org/prizes/physics/1967/bethe/lecture/

          This is a false statement:

          – In 1938, Hans Bethe proved that fusion produces the enormous energy emitted by stars.

          – He didn’t “prove” anything.

          Here is the lecture
          https://www.nobelprize.org/uploads/2018/06/bethe-lecture.pdf

          Read through the lecture, and there is no mention of actual physical experiments fusing hydrogen into helium, and they have never done so in any experiment.

          What is fun about the lecture is the mention, right at the beginning, of the old model of the Sun, which was based on gravitational compression.

          – The Night Land by William Hope Hodgson was based on that concept.

          Here is the paper by Lord Kelvin:

          On the Age of the Sun’s Heat
          https://zapatopi.net/kelvin/papers/on_the_age_of_the_suns_heat.html

          This is an overview of:

          How the sun shines
          https://www.nobelprize.org/prizes/themes/how-the-sun-shines/

          – All the discussion is pure fantasy, not supported by actual science.

          As I said, there are great stories using the fantasy power source of “fusion”, of all kinds: from the Bussard Ramjet (Tau Zero) to the fuel-pellet drives of The Expanse, but you will not find “fusion” power or drives in any of my stuff.

          • FWIW, none of the ongoing fusion efforts are focused on H-H.
            (You need stellar masses or absurd temperatures and magnetic fields for containment. Plus it’s a two-stage process: H-H → D, followed by D-D. Unlikely to reach breakeven. Strictly SF.)

            The real-world targets are all D-T and D-He3.
            The latter is harder, but it is cleaner because it doesn’t produce free neutrons the way D-T does, which irradiate the device walls.
            D-He3 is the simplest, likeliest-to-be-achieved aneutronic fusion reaction:

            https://en.m.wikipedia.org/wiki/Aneutronic_fusion
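            For reference, the standard textbook numbers behind this comparison (added here as a worked example; the energy splits are well-established values):

```latex
% D-T: easiest to ignite, but ~80% of the yield leaves as a fast neutron
D + T \;\to\; {}^{4}\mathrm{He}\,(3.5\ \mathrm{MeV}) + n\,(14.1\ \mathrm{MeV})
% D-He3: harder to ignite, but the yield is carried by charged particles,
% which magnetic fields can steer (and, in Helion's design, harvest directly)
D + {}^{3}\mathrm{He} \;\to\; {}^{4}\mathrm{He}\,(3.7\ \mathrm{MeV}) + p\,(14.7\ \mathrm{MeV})
```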

            P-Li7 and P-B11 are both possibilities for second-generation aneutronic reactors, but P-B11 is the preferred reaction, all else being equal, because boron is abundant and dirt cheap.

            As I said, fusion isn’t hard to achieve: it’s not a physics problem. It’s an engineering problem. Those tend to get solved with better tools, materials, processes, or combinations thereof. The only question is finding the right mix.

            (It’s why I’m keeping an eye on Helion. They’re using a variation of particle accelerator tech to collide and compress ionized plasmas. As long as the magnetic fields hold, they should be able to reach a commercially viable pulse rate with both D-T and D-He3.)

            Cue up Clarke’s First Law: “When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”

            Time makes liars of most eminences.

            • “Aneutronic fusion”

              Okay, that got interesting as I chased down many rabbit holes.

              – The diagram they show in the wiki page fits the concept of a “fusion” drive producing thrust without the problem of neutron radiation.

              That makes sense because the so-called “hydrogen” bomb is actually a “fusion” device that fuses lithium deuteride, not “hydrogen”.

              Thanks…

              Track down Starfire by Paul Preuss. Kindle has it for $1.99:

              Starfire (Paul Preuss novel)
              https://en.wikipedia.org/wiki/Starfire_(Paul_Preuss_novel)

              I’ve had the book on my shelf since it came out, but I only read it again last year.

              The book goes into detail focusing on hydrogen “fusion” with the lithium generating the tritium so that they can have tritium and deuterium “fusion”, with the lithium as added reaction mass.

              – If they had used lithium deuteride as “Aneutronic” fusion instead, it would have made more sense, but it came out in the 80s so they probably didn’t know.

              For my stuff, I see using something like John Varley’s “squeezer” that he used in his “Thunder and Lightning” series. Then I could have a boron fusion drive. Now that kind of “fusion” drive I can work with. It would have the flavor of The Expanse.

              As an aside, Starfire has an odd character development that I’m trying to understand, so I need to read it again, otherwise I would not have remembered the drive they use.

              • In the best of all worlds I would like to see the Polywell electrostatic confinement concept succeed, because it uses primarily electric fields and might achieve fusion in a compact sphere just a few feet in radius. The ESA is also looking into the concept for a fusion drive.

                https://www.esa.int/gsp/ACT/projects/polywell/

                No shock, given Bussard’s “ramjet” space drive, most notably used in Poul Anderson’s masterful TAU ZERO.

                I once ran into an online “debunking” of Polywell claiming it would “never work” because it doesn’t produce neutrons. 😀
                (Their preferred process is P-B.)

                BTW, the Helion prototypes look a lot like the warp core in ENTERPRISE. A horizontal cylinder.

  2. Perhaps PG has never heard an AI researcher proclaim that AI will lead to a utopia where humans sit on their hands all day… but (a very, very bad) nineteenth-century writer of a scientific romance did, indirectly referring to advances in Babbage engines. Bellamy’s Looking Backward posited a universal four-hour workweek a century before HAL would have demonstrated that a 24/7 workweek isn’t good for mere machines, either.

    Which ignored the need for mechanics to service the Babbage engines controlling the automated farms and factories, but ignoring maintenance and logistical reality is part of the AI movement, too (or at least was when I was on its very scary, uncontrolled periphery).

    • More recent fantasists have posited self-repairing machines and “repair droids”, reducing human mechanics to slave supervisors: R2-D2 and brethren, the little boxy robots in the Death Star and other installations. Star Wars is infested with the things.

      Plenty of space operas play with the concept of automated repair ships and factory ships. Of particular note, David Weber’s EMPIRE FROM THE ASHES. Fun read, but way over the top in handwaving the “how do we get there from here” factor. Perfectly fine in fiction, but eye-rollingly stupid thinking in the real world, where the law of unintended consequences rules. “Humans will soon be immortal!” “The Singularity is coming!” “Humans will be obsolete!”

      Right.

      Academic tech pundits are, as a class, clueless about real world considerations of new developments and only useful for retroactive mockery. 😉

      • R2-D2 is a poor example because except when there’s actual, visible fire — a bad thing on a spacecraft — its/his† work has to be prioritized and directed. It/he has to be told to “lock down that stabilizer”… or, presumably, to put another batch of bantha-cucharron in the fryer. In short, R2-D2 is somewhat-skilled labor with external management and little initiative, just like upper management at Whitey’s thinks of its kitchen staff.

        † This simultaneously underinclusive and overinclusive anthropomorphism is here on Monday morning, without enough caffeine, with malice aforethought to enrage fandom, to offend those who need to be offended, and to drag in the “well, what if an AI decides to be a drag queen on karaoke night at the Cantina?” imagery that you can never unsee. You’re welcome.

            • I love the KOTOR games, especially KOTOR 2. One of these days, I really must fire up the online version of KOTOR and see if my old characters are even still on a server somewhere.

              HK-47 is an excellent example of the “affably evil” trope, and a useful jumping-off point if one wants to design a villain who is not a mustache-twirler. A good villain may have thought-provoking and useful insights to challenge a hero.

    • The Roomba comment was the point when I decided the OP needed to be posted on TPV because it was so over-the-top, E.

  3. No intelligence in evidence here:

    https://www.msn.com/en-us/news/technology/marines-fooled-a-darpa-robot-by-hiding-in-a-cardboard-box-while-giggling-and-pretending-to-be-trees/ar-AA16AjAD

    “The state-of-the-art robots used by the Pentagon had an easily manipulated weakness, according to an upcoming book by a former policy analyst: Though they’re trained to identify human targets, the bots are easily fooled with the most lackluster of disguises.

    “In his upcoming book, “Four Battlegrounds: Power in the Age of Artificial Intelligence,” former Pentagon policy analyst and Army veteran Paul Scharre writes that the Defense Advanced Research Projects Agency (DARPA) team trained its robots with a team of Marines for six days to improve its artificial intelligence systems.

    “Eight Marines placed the robot in the center of a traffic circle and found creative ways to approach it, aiming to get close enough to touch the robot without being detected.

    “Two of the Marines did somersaults for 300 meters. Two more hid under a cardboard box, giggling the entire time. Another took branches from a fir tree and walked along, grinning from ear to ear while pretending to be a tree, according to sources from Scharre’s book.
    “Not one of the eight was detected.

    “The AI had been trained to detect humans walking,” Scharre wrote. “Not humans somersaulting, hiding in a cardboard box, or disguised as a tree. So these simple tricks, which a human would have easily seen through, were sufficient to break the algorithm.”

    Though it is unclear when the exercises in Scharre’s book took place, or what improvements have been made to the systems since, DARPA robots have long faced obstacles to their performance, including poor balance and concerns over their potential to cause accidental killings due to AI behaving in unpredictable ways.

    “The particular experiment this appears to reference, part of our Squad X Core Technologies program, was an early prototype test to improve squads’ situational awareness by detecting approaching people,” a DARPA spokesperson said in an email to Insider. “In the next experiment, our team demonstrated a detection ability that exceeded expectations in a far more challenging environment. This is the nature of high risk and actively managed programs. We have not read the book but, in general, we are constantly testing and experimenting with new technologies.”

    The spokesperson added: “There is certainly still much work to be done to advance AI capabilities and we’ve seen tremendous promise in its national security applications.”

    Uh, yeah.
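    The failure mode Scharre describes is what machine learning people call distribution shift: a model only generalizes across the variation present in its training data. A toy sketch of the same effect, with invented numbers and a deliberately simple classifier:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data: "humans" are points drawn from the walking-gait region of
# a made-up 2-D feature space; "background" is everything else the sensor saw.
humans_walking = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(500, 2))
background = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(500, 2))
X = np.vstack([humans_walking, background])
y = np.array([1] * 500 + [0] * 500)

clf = LogisticRegression().fit(X, y)
print("walking humans detected:", clf.score(humans_walking, np.ones(500)))  # ~1.0

# Test data: still humans, but moving in ways never seen in training
# (somersaulting, hiding in a box). Same class, shifted features.
humans_somersaulting = rng.normal(loc=[-1.5, -1.0], scale=0.5, size=(500, 2))
print("somersaulting humans detected:",
      clf.score(humans_somersaulting, np.ones(500)))  # near 0.0
```

    Nothing in the model “knows” what a human is; it only knows the feature region its training data happened to cover.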

    • Thank you for this find. I read the transcript, because it’s faster 🙂

      They’re spot on about writing to algorithm. This is why I always bring up Terry Pratchett when discussions about AI taking over writing come up: to write like Pratchett, you have to have a soul: a mind, will, emotions. You have to be able to look at life diagonally, which isn’t something you can program a computer to do.

      The first time I heard about AI having weird missteps like the tree example above, I thought it was a good thing, because it may deliver a dose of humility to scientists. “We can program the perfect person, the perfect society,” say hubristic wanna-be masterminds. But AI that can’t tell trees from people shows we don’t quite understand how humans think, how we perceive the world, or even how we know what we know.

      In the talk at the link they make a great point about music. Ever watch Epic Rap Battles of History on YouTube? They throw in vocabulary and wordplay that you don’t hear in modern songs: Bucephalus (the one with Alexander the Great). Or Robert Oppenheimer telling Thanos he’ll split “U like 2 and 3 from 5.” Do songs pull off conceits like Prince’s “Little Red Corvette” anymore? Or flights of fancy, like Freddie Mercury’s ode to a painting, where he speaks of a tatterdemalion and includes a harpsichord in the music?

      That kind of music is rare in current pop culture, and people notice. They notice when something is “factory produced” versus made with heart. There’s a song floating around YouTube called “Plastic Love” by Mariya Takeuchi where the singer takes a full 40 seconds to begin singing. (The song is from the ’80s, although for some reason they only just now created an official video for it.) And I’ve seen people comment on that in a positive way, and wonder why current-day pop songs cater to short attention spans.

      AI will not replace musicians and songwriters who can do their own “Fairy Feller’s Master-Stroke.” Writers who can only write to formula, and cannot add their own wit and verve? Replaceable. I’m going to decline to worry about that in hopes that AI prompts a re-think in how education is approached. It’s long overdue. And soon enough, it won’t be avoidable: McDonald’s now has computers to take your order.

        • ‘Cheap’ is a relative term. The California legislature, bless the place where its brains ought to be, has just raised the minimum wage to $22 per hour, but only for jobs in chain restaurants with more than a specified number of outlets nationwide. (I believe the threshold is 100.)

          Kitchen robots (if they work as promised) won’t make White Castle’s business cheaper to run. They will make it less expensive than it would be to comply with the new California law. As a side effect, they will drive the learning curve on such robots towards the point where it may actually become cheaper to use them than it is to pay humans the minimum wage in some other jurisdictions.

          (Definition of ‘side-effect’, if you’ll pardon me for quoting myself: ‘The principal effect of an action, when ignored by persons who would rather direct their attention to a small incidental benefit.’ The small incidental benefit is that some fast-food workers in California will get a fat pay rise when all the others are sacked.)

          • 1- The White Castle robots are based on decades-old, off-the-shelf hardware tech. What is new is the software, which isn’t a big leap. What they’ve been testing is the economics. Cooking fries isn’t particularly challenging for robots with automotive assembly-line heritage.

            2- The California law isn’t just about the fast-food minimum wage, but also about working conditions, and it is vaguely worded in ways that allow state regulators to control day-to-day operations. (Note the composition of the regulatory agency.) It will also make firing employees (“retaliation”) much harder and thus incentivize minimizing hiring, an endemic effect in continental Europe.

            https://www.msn.com/en-us/foodanddrink/foodnews/californias-fast-food-bill-was-just-signed-heres-what-happens-next/ar-AA11wY2D

            3- Reduced fast-food employment is not necessarily an incidental effect. After all, reducing employment opportunities for the minimally skilled will increase the number of wards of the state dependent on government largesse, who will thus vote for the party with the supermajority, finalizing California’s evolution into a single-party state.

            4- The fast food kitchen robots and computer based ordering functions are further evidence that “AI” is a solution to human stupidity.

            • #3 in particular. I have warned people in the past who were banging on about making the minimum wage $15 an hour that if they were successful, they were going to cut off the ladders of opportunity for young people and the poor; that the McJobs would be done by robots instead. At the time I attributed their reactions to naivety, if not outright stupidity, about basic math and human thought processes.

              But of late, #3 is a viable option on the table.

          • Heck, McDonalds already has robots at work. Little ones that you don’t really notice.

            When I put in a drive-through order that includes a soft drink, a little carousel by the window plops down the right-size cup, fills it properly with the ordered soda, and advances it along. When the order filler comes to the window with my food, the soda is waiting for them to cap it and hand it to me. The robot was driven by the order-taking mechanism; the order filler does nothing but cap it and hand it over. That’s a robot. Granted, it doesn’t “think,” but it counts.

            Felix is of course correct: if someone says “augmented by machine learning,” then they have some NN in use somewhere. If they spout about AI, the marketers are in control.

  4. I think we are dealing with something different from human ‘intelligence’ here. Awareness has many levels, such as social skills, interpretation of primate behavior, and code-reading. These machines seem good at code-reading and proofing, which alone will make them a threat to thousands of human jobs. As for the other ‘skills’ involved in human intelligence, AI has a long way to go. But give it a decade or two.

    • Nah.
      Machine Learning is a replacement for human stupidity, not human intelligence.

      Cyril Kornbluth might be right after all; we do seem headed for a world where a tiny minority of competents keeps everything running despite the “best” efforts of idiots all over. Without these new tools amplifying the productivity of the competent, the world would come to a halt. Note which areas are being targeted: businesses with competent-labor shortages, like trucking and mining.

      Anybody whose job can be *better* performed by a machine (mechanical or software) should be doing something else or learning to do something else. Key word: better.

    • People were saying ‘Give it a decade or two’ in 1980.

      PG to the contrary, today’s so-called AI is not comparable to the aeroplane that flew at Kitty Hawk. Heavy R&D work in this field has been going on for more than sixty years, with much of the funding driven by the misnomer by which the field is known.

      If you want a historical comparison, I would suggest that AI researchers are like mediaeval alchemists, and human-equivalent machine intelligence is like the transmutation of base metals into gold. The alchemists invented the science of chemistry as they went along, but they never did find what they were looking for, and we now know it is a physical impossibility to transmute elements by any chemical means.

  5. As I have more than once said in these comments, AI is a misnomer and no intelligence is involved, so I have to agree with the writer and suggest that PG’s example of the reception of PCs misses the point. Noting that the emperor has no clothes (or, as the writer does, that AI has no intelligence) is not debunking the technology, just calling out the marketing hype.

    I must admit, though, that of late events have left me with some doubts, not about the lack of I in AI, but as to whether the world is suffering from a lack of non-artificial intelligence. Maybe a lot of my fellow humans’ brains are just running on AI-like neural algorithms?

    • Agreed 99%
      The 1% is that I see no intelligence at all in software. With the caveat that first we have to define intelligence. If we reduce intelligence solely to “problem-solving ability,” then maybe. (Human intelligence is actually abundant, just not in the media. Or politics.)

      One thing the handwringers forget is that the General Intelligence of fictional AIs includes initiative and self-awareness. The latter is hard to validate externally, but not the former. A true AI must be capable not just of carrying out a mission without human instruction, but also of choosing a mission arbitrarily, without human instruction or definition. (Giving it a list of projects to choose from doesn’t count.) The Google idiot deserved to be fired for forgetting that.

      The proper term for the range of technologies misnamed “AI” is machine learning. And indeed, most companies delivering end-user applications derived from neural networks properly refer to their systems as machine learning.
      What is marketed as “AI” is a variety of very different applications that broadly rely on emergent software algorithms produced by a separate software “model” trained by neural networks analyzing massive datasets. The bête noire of the day, ChatGPT, is just one such algorithm produced by the latest version of OpenAI’s GPT-3 (3rd-generation Generative Pre-trained Transformer) model. So is DALL-E. So are dozens (hundreds? thousands?) of actual and potential applications that can be coded to apply that model to arbitrary datasets. The apps are each independent of one another.

      https://www.techtarget.com/searchenterpriseai/definition/GPT-3
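      The model/application split described above can be sketched like this; a minimal, hypothetical interface (not OpenAI’s actual API): one expensive pretrained model, many thin, mutually independent apps coded against it.

```python
# Hypothetical sketch of the "one model, many independent apps" structure.
# PretrainedModel stands in for something like GPT-3: trained once, at great
# cost, then reused unchanged by every application built on top of it.

class PretrainedModel:
    """Stand-in for a large model: maps a prompt to a continuation."""
    def complete(self, prompt: str) -> str:
        return f"<model output for: {prompt!r}>"  # placeholder, not a real model

# Each "AI product" is just a thin wrapper that frames a task as a prompt.
# The apps know nothing about each other, or about how the model was trained.

class ChatApp:
    def __init__(self, model: PretrainedModel):
        self.model = model

    def reply(self, message: str) -> str:
        return self.model.complete(f"User says: {message}\nAssistant:")

class ResumeWriter:
    def __init__(self, model: PretrainedModel):
        self.model = model

    def draft(self, job: str, facts: str) -> str:
        return self.model.complete(f"Write a resume for {job}. Facts: {facts}")

model = PretrainedModel()  # the costly, shared artifact
chat, resumes = ChatApp(model), ResumeWriter(model)
print(chat.reply("hello"))
print(resumes.draft("editor", "ten years at Tech Register"))
```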

      To add to the fun, GPT-3 is just one of the infinite variety of models that can be produced by OpenAI’s (or other organizations’) neural networks. IBM, Meta, Google, Amazon, China, and a horde of startups globally have been creating and exercising neural networks and applying them to pretty much everything, everywhere. Still no intelligence, nor anything resembling sentience. But very useful already. Which is nothing compared to what is coming.

      Things like this:
      https://scitechdaily.com/discovering-hidden-archaeological-sites-with-ai-and-satellite-images/

      Or like this:

      https://www.deere.com/en/sprayers/see-spray-ultimate/

      Or this:

      https://www.popularmechanics.com/technology/infrastructure/a29131330/automated-construction-equipment/

      Note that none of these applications is even vaguely related to language processing or image assembly. Instead they rely on machine vision and analysis, and robot control systems. That is where the big money lies today. The example the OP gripes about, Tesla’s robotic driving software, is an extreme example of this kind of machine learning, and probably the hardest, because of the risk to the car *from* the unpredictable actions of the cars under “meatbag” control. Even now there are hundreds of robot mega-trucks working in mines in Australia and other developed nations.

      https://www.zdnet.com/article/giant-robot-trucks-are-now-mining-gold/

      The media is hyperventilating over what is and will remain the least impactful use of machine learning. Which is par for the course given their parochial mindset.
