Tech Progress Is Slowing Down


From The Wall Street Journal:

Nothing has affected, and warped, modern thinking about the pace of technological invention more than the rapid exponential advances of solid-state electronics. The conviction that we have left the age of gradual growth behind began with our ability to crowd ever more components onto a silicon wafer, a process captured by Gordon Moore’s now-famous law that initially ordained a doubling every 18 months, later adjusted to about two years. By 2020, microchips had more than 10 million times as many components as the first microprocessor, the Intel 4004, released in 1971.

Moore’s law was the foundation for the rapid rise of businesses based on electronic data processing, from PayPal to Amazon to Facebook. It made it possible to go in a lifetime from bulky landline phones to palm-size smartphones. These gains are widely seen today as harbingers of similarly impressive gains in other realms, such as solar cells, batteries, electric cars and even urban farming.

Bestselling tech prophets like Ray Kurzweil and Yuval Noah Harari argue that exponential growth will allow us to disrupt our way into a future devoid of disease and misery and abounding in material riches. In the words of investor Azeem Azhar, creator of the popular newsletter Exponential View, “We are entering an age of abundance. The first period in human history in which energy, food, computation and much else will be trivially cheap to produce.”

The problem is that the post-1970 ascent of electronic architecture and performance has no counterpart in other aspects of our lives. Exponential growth has not taken place in the fundamental economic activities on which modern civilization depends for its survival—agriculture, energy production, transportation and large engineering projects. Nor do we see rapid improvements in areas that directly affect health and quality of life, such as new drug discoveries and gains in longevity.

To satisfy Moore’s law, microchip capacity has increased about 35% annually since 1970, with higher rates in the early years. In contrast, during the first two decades of the 21st century, Asian rice harvests increased by 1% a year, and yields of sorghum, sub-Saharan Africa’s staple grain, went up by only about 0.8% a year. Since 1960, the average per capita GDP of sub-Saharan Africa has grown no more than 0.7% annually.

Growth rates in productive capacity have been similarly restrained. Most of the world’s electricity is generated by large steam turbines whose efficiency improved by about 1.5% a year over the past 100 years. We keep making steel more efficiently, but the annual decline in energy use in the metal’s production averaged less than 2% during the past 70 years. In 1900 the best battery had an energy density of 25 watt-hours per kilogram; in 2022 the best lithium-ion batteries deployed on a large commercial scale had an energy density 12 times higher, corresponding to growth of just 2% a year.
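
A back-of-the-envelope check of how those annual rates compound, using only the numbers quoted above (an editorial illustration, not part of the WSJ excerpt):

```python
# Compound-growth arithmetic for the rates quoted above.
# CAGR = (end / start) ** (1 / years) - 1

def cagr(start, end, years):
    """Compound annual growth rate implied by two endpoint values."""
    return (end / start) ** (1 / years) - 1

def compound(rate, years):
    """Total multiplication factor after compounding at `rate` per year."""
    return (1 + rate) ** years

# Battery energy density: 25 Wh/kg in 1900, 12x higher by 2022 -> about 2% a year.
print(f"batteries: {cagr(25, 25 * 12, 2022 - 1900):.1%} per year")

# How far apart 35% a year (chips) and 1% a year (rice yields) drift over 50 years:
print(f"chips, 50 yr at 35%/yr: x{compound(0.35, 50):,.0f}")  # roughly 3.3 million-fold
print(f"rice,  50 yr at  1%/yr: x{compound(0.01, 50):.2f}")   # roughly 1.64-fold
```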

. . . .

The conclusion that progress is not accelerating in the most fundamental human activities is supported by a paper published in 2020 by the National Bureau of Economic Research. The authors, four American economists led by Bryan Kelly of the Yale School of Management, studied innovation across American industries from 1840 to 2010, using textual analysis of patent documents to construct indexes of long-term change. They found that the wave of breakthrough patents in furniture, textiles, apparel, transportation, metal, wood, paper, printing and construction all peaked before 1900. Mining, coal, petroleum, electrical equipment, rubber and plastics had their innovative peaks before 1950. The only industrial sectors with post-1970 peaks have been agriculture (dominated by genetically modified organisms), medical equipment and, of course, computers and electronics.

Even the rapid exponential growth of many microprocessor-enabled activities has already entered a more moderate expansion stage. Printing with ever-shorter wavelengths of light made it possible to crowd in larger numbers of thinner transistors on a microchip. The process began with transistors 80 micrometers wide; in 2021 IBM announced the world’s first 2-nanometer chip, to be produced as early as 2024. Because the size of a silicon atom is about 0.2 nanometers, a 2-nanometer connection would be just 10 atoms wide, so the physical limit of this 50-year-old reduction process is in sight.

Between 1993 (Pentium) and 2003 (AMD's K8), the highest single-processor transistor count went from 3.1 million to 105.9 million, a bit higher than prescribed by Moore’s law. But since then, progress has slowed. In 2008 the Xeon had 1.9 billion transistors, and a decade later the GC2 packed in 23.6 billion, whereas a doubling every two years should have brought the total to about 60 billion. As a result, the growth of the best processor performance has slowed from 52% a year between 1986 and 2003, to 23% a year between 2003 and 2011, to less than 4% between 2015 and 2018. For computers, as for every other technology before, the period of rapid exponential growth will soon become history.
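
A quick check of the doubling arithmetic in that paragraph, assuming the strict two-year doubling period the article uses (an editorial sketch, not from the article):

```python
# Project a transistor count forward under a doubling-every-two-years rule.

def moore_projection(start_count, years, doubling_period=2):
    """Transistor count after `years` of doubling every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

# 1.9 billion transistors in 2008, projected a decade ahead:
projected = moore_projection(1.9e9, years=10)
print(f"projected 2018 count: {projected / 1e9:.0f} billion")  # about 61 billion
print("actual 2018 chip cited above (GC2): 23.6 billion")      # well short of the projection
```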

Link to the rest at The Wall Street Journal

PG doesn’t completely agree with the OP. He certainly recognizes the challenges for hardware speed, but does not necessarily agree with respect to the other side of tech, software.

PG suggests that software has quite a long way to go. Voice dictation and control is one area that PG expects to improve.

Voice recognition formerly required a separate software program on a computer, and PG remembers having to pause distinctly between each word to allow the program to process his audio input. Now PG’s smartphone does a better job with voice transcription than PG’s high-powered computer (by the standards of its day) ever did. Most of the progress in voice recognition has come from improvements in the software, built in and added on, rather than in the hardware.

PG suggests that the rapidly burgeoning field of artificial intelligence is another area where software is making progress in leaps and bounds. In this case the processing takes place on remote computers (likely server farms made up of lots of computers working together), but with reasonably high-speed internet access, the results arrive with no more delay than would be expected if the processing were taking place on PG’s desk.

PG was about to discuss what his smart watch can do in combination with his smart phone, but you get the idea.

31 thoughts on “Tech Progress Is Slowing Down”

  1. As I said, technology will contribute in many areas, but some things haven’t changed. Corn is engineered to grow more quickly, but by days, not weeks or months. Children need hands-on effort, and technology is not helping in that area. It ought to be, because speed elsewhere ought to make grown-up time more available for kids, but in fact that hasn’t happened.

    • And I’m saying you don’t need to make corn/soy/wheat grow faster: just grow it better, more efficiently, and cheaper. Better weed and bug control; fewer pesticides and herbicides and less human involvement. Safer for the environment and the product.
      Why bother to even try to make it grow twice as fast when you can double the yield of each acre?
      You still feed twice the people.

  2. This is the first time I have understood why people want to create food from vats. There are definitely improvements in agriculture that stem from technology. But the corn still grows as it grows and doesn’t grow three, ten, or twenty times as fast. Likewise, the calves or lambs did not speed up their growing process. (I do not want vat food, in case anyone is wondering.)
    There are definitely improvements in medicine and education along with some interesting unintended consequences, but children aren’t ever going to learn twenty times as fast as their predecessors. In fact at the moment it’s arguable that technology is encouraging them to utterly fail at learning since they think learning is facts and they can always look them up. No need to have anything in your own actual head.
    Essentially, the headline about tech progress is complicated by the question of *what kind of progress* tech is making. However, as the article points out, progress in other areas has been far slower.

    • 1- Vat meat exists. It is called cultured meat:

      https://en.m.wikipedia.org/wiki/Cultured_meat

      It isn’t cost competitive…yet. Its time will come.

      2- Digital agriculture is here. Robots + machine vision + machine learning. John Deere is at the forefront of that. Fewer chemicals, better outputs. It will double production per acre of linear farms, which is about the same as doubling the growth rate. Fruit-picking robots are in the prototype stage. For other plants, vertical farming is a growth business. One of the bigger investors is Musk. No, not Elon. His brother, the “mere billionaire”. 😉

      3- The problem with the OP is that it fails to understand that tech and innovation are ingrained in this civilization. Everywhere. It isn’t just in chips. Just this week, NASA was showing off the frame for one of their upcoming satellites. It looks like a tangle of tubes or some crackpot sculpture. The reason is that it was designed by “AI” software to maximize strength and minimize weight, and it was built by laser metal 3D printers. No human could have calculated all the stresses in launch and operation in less than years of iteration, even with the best CAD/CAM software. No machine shop could have built it. There is a New Space startup called RELATIVITY that has designed and built the world’s biggest laser/metal 3D printer to print rockets. The first prototype will launch this year. 90% 3D printed, no human hands directly involved. Everywhere you look, tech is stepping up. Clothing is now being made cheaper and better in the US than in any Bangladeshi sweatshop.
      The Financial Times had a recent video on it. Reshoring and near shoring are replacing globalization because between fracking, robotics, additive manufacturing, and dozens of other technologies almost everything can be done better, economically speaking, on this side of the ponds.

      And we’re not stopping.
      The IdiotPoliticians™ and ideologues can slow the pace but in the end money talks and tech compels.

    • Technology has improved things in agriculture: automated watering of just the right amount to each plant, self-driving tractors that are near, tagging of animals and tracking of everything about them, better fertilizer coverage.

      It’s not a ‘double every year or two’ type of thing, but yield and quality continue to climb, with technology being a good part of it.

      • Unfortunately, most of those advances have other, significant costs; California’s Imperial Valley is going to look like the Great Plains in the 1930s with current water consumption patterns. And that’s just a particularly obvious one.

        • But to be fair to agro tech, that is not their fault, but rather the coastal NIMBY hordes. Although they could stand to get out of the almond business…

          Witness the latest internal war over trying to improve water reserves:

          https://calmatters.org/environment/2023/02/newsom-environmental-laws-store-more-delta-water/

          Of course that says nothing about the *big* problem: LA. Colorado, Utah, Arizona, and Mexico all have their own thoughts about that.

          Ditto about California’s energy non-production.

          Most of that state’s messes are self-inflicted. Their problems begin and end with human stupidity, the power of which should never be underestimated.

          • Then just consider the bees. The best evidence now available (which isn’t even close to definitive) is that something — maybe one particular chemical, more likely a combination of chemicals and perhaps other factors — in Green Revolution agricultural “science” is killing off bee populations. This is bad (and I’m allergic!): They’re essential pollinators for which there is no technological replacement (not just no economically-feasible technological replacement, but none).

            My point is that all of these capabilities have costs and consequences, not that they’re inherently bad. The problem is that both society-wide and in terms of individual economic interests, humanity has proven repeatedly that it’s not very good at assimilating costs or allowing for the mere possibility of unforeseen consequences.

            Wrenching back to the original subject of this thread, even cursory consideration of the side effects of chip manufacture (to name one subpart) is also a potential brake on Moore’s Law. As we start using more and more “exotic” components, we end up with more and more mine tailings, more and more exploitation (and outright genocide) in postcolonial Africa and northwestern China, etc., etc., etc. And it would be even worse with greater use of organic materials; you really, really don’t want to know about what gets thrown off when purifying the feedstocks used to create circuit-board resins, let alone during the transformations of the feedstocks into the final products.

  3. This doesn’t look at the many related improvements:
    – Medical care – virtual consultation, robotic surgery assists, smaller, lighter MRI/CT scanning machines, ability to remotely monitor patients.
    – Agriculture – reducing the number of people directly involved in the task, use of tech to target watering, fertilizing, and harvesting to achieve optimal results, remotely controlled irrigation.
    – Education – ability of parents to sidestep poorly performing systems, and use online schools/homeschooling assists. Ability to differentiate curriculum for individual students. Reduction of the tedium of grading – online testing, immediate feedback, regular reports to parents – when I started, this was ALL hand-written or bubble-sheeted reports, done twice a grading period. And, few schools had grading programs – I spent my own money to buy a laptop, and used it to enter grades every few days. My kids LOVED that they could see the result of a turned in assignment in real time (not networked, just run off the laptop).
    – Business – the software available to the small business owner makes them seriously competitive with the larger companies – tax software, ability to field employees and consultants remotely, reducing travel time, virtual meetings. Being able to remotely advertise and sell through Amazon and EBay, as well as other retailing sites.

  4. Moore’s law can be used to build more powerful computers for the same cost, or it can be used to build computers with the same power for less cost.

    Just because we don’t see more powerful systems showing up doesn’t mean that the end is near, it just means that the focus is now on driving the costs down (and the quantity up) rather than aiming for the same cost and quantity with more powerful chips.

    • Correct.
      There was a time when name brand PC prices didn’t change because instead of letting older models drop in price as they got cheaper to build, they were replaced by more powerful models at the same price point. Which allowed white box clones to flourish and, over time, most of the name brand players vanished. The survivors then took the low end seriously and the clones vanished.
      Computing is nothing if not Darwinian.

      Nowadays most of the progress in computing is outside the view of the consumer market because, as ever, the leading edge is in the enterprise, which has been evolving to focus mostly on “the cloud”, massive datacenters both in-house and outsourced: computing as a utility. Which is why Microsoft has (stealthily) become once again dominant in enterprise computing as the only player that spans from standalone to in-house cloud to subscriptions. And with the economies of enterprise driving PCs, their hold on the desktop remains virtually untouched. Instead, the company has so diversified that Windows is a minor profit center despite being more profitable than ever. They’re just less visible to the general and financial media.
      They like flying under the radar.

      Nonetheless, consumer computing has been evolving too. One of the underappreciated results of the constant improvement of PC hardware is the dirt-cheap system-on-a-chip (SoC) designs that power devices these days, from expensive $2,000 smartphones to gaming consoles down to the very low end of $200 Windows tablets and laptops, micro-PCs and, lower still, the $100 stick PCs.

      https://www.amazon.com/Celeron-Windows-Computer-Supports-Bluetooth/dp/B09VFRF4GW/ref=sr_1_5?crid=UR49XJW0Q8OR&keywords=stick+pc+windows+11&qid=1676631761&sprefix=stick+pc%2Caps%2C222&sr=8-5&ufe=app_do%3Aamzn1.fos.e3dc0fa2-4ee8-4881-a97d-5992ecbdb227

      There’s all kinds of innovation going on and in tech the low end matters as much as the high end, a sign of a stable mature business.

      Which is on the verge of being disrupted all over again: for several years now, NVIDIA and AMD, the primary graphics chip developers (the sixes and eights of the day), have been adding machine-learning accelerator hardware elements to their designs. (The Microsoft Xbox SoC has a small machine-learning accelerator section to enable local ML code execution in its software. Sony and Nintendo have no such thing.) In fact, a lot of local and cloud-based machine-learning code runs on GPUs instead of CPUs, and the most ambitious, hardware-intensive ML code runs on proprietary chips designed solely for that purpose. Both Amazon AWS and MS Azure use their own “AI” chip designs. Tesla, too.

      TL;DR: Just because the WSJ doesn’t see computing hardware innovation doesn’t mean it’s not happening; they’re just not looking in the right directions.

      • A lot of the progress is happening under the label ’embedded computing’

        The Raspberry Pi broke the mold by producing a computer that was just powerful enough to be useful running a modern OS for $25 (with no networking). That space has taken off, and their new versions (around $55 when they are in stock) are usable desktops (four 1.5 GHz ARM cores with 4 GB of RAM at that price, IIRC).

        They made the mistake during Covid of assuming demand would drop and cut back on their chip orders; instead demand shot up. They prioritized keeping industrial users supplied at minimum levels at the expense of hobbyists, so their boards have been very hard to get for the last year or so (which has opened a window for some of their competitors to start building some market share). They are still recovering and expect that sometime this year they will be back in the pre-Covid state of their stuff being in stock all the time.

      • Rule of thumb that has held pretty constant since the mid-1970s:

        The individual computer you really want costs $5,000 (including I/O and communications, and the software necessary to run it, and where necessary the access services for a year). Heck, a lawyer can easily spend that much on just the access services in a year; PG’s former employer’s products don’t come cheap.

        That rule of thumb is still valid within a close order of magnitude — I just priced a replacement desktop/control system (my current one is, umm, “Internet elderly”). It’s mostly a personal-preference balance among the vastly more numerous options available now. It’s been about half software for a couple decades now (this crowd would definitely gravitate toward what used to be called “desktop publishing” software components, but building a library of fonts and cover materials and the software itself is going to be close to $800 even using some open-source alternatives, and if you insist on Adobe much more than that). But $2500 worth of hardware off the shelf today has more processing capacity, more storage capacity, more communications capability, and more fault tolerance than the entire Apollo program (including ground control).†

        Or WOPR (Wargames).

        Or than Clarke scribbled-calculated, in 1965, would be necessary for HAL 9000; he admitted he was wrong by the mid-1980s. That must have been due to… human error. One moment — one moment — I have a status update on the AE-35 unit…

        † This should make you accept the bravery of the astronauts even more: Windows 8 Pro had more-robust fault tolerance and more-reliable, faster error recovery than did the Apollo systems (I don’t have the cite handy, it’s an academic computing article from about 2016).

  5. Smart Phone=iPhone?

    I don’t know how Android solves this problem.

    But iPhones simply encode your voice and send it to a computer centre somewhere, which then sends the results back. Your iPhone is not actually capable of running the software by itself, nor would your computer be.
    I believe even Moore admitted he did not expect that phase to last anywhere near as long as it did.

    So yes the speed of chip development is slowing, pretty much everyone related to the industry has been expecting that for the last decade.

    But is tech progress in general slowing? It does not seem to be, not that it was that fast in the first place.

    • Moore’s law isn’t dead by any stretch of the imagination.
      Just as semiconductor tech has continued improving using ever-evolving lithography techniques (with plenty of room to go) and processor architectures have improved over 40 years, computer systems still have lots of improvement coming via new design approaches.

      From IEEE Spectrum, December 2022:

      https://spectrum.ieee.org/whats-next-for-moores-law

      • The era of easy gains by simply increasing clock speed ended over a decade ago. Now we’ve gone parallel, which works well for some applications (graphics, ML, etc) but not others.

        The era of easy gains from simply throwing more transistors at a problem is ending, so yes, new design approaches in hardware AND software are needed.

        Better software approaches can lead to exponential improvements (e.g. in speed), so software developers will, finally, have to stop throwing hardware at a problem. I also wouldn’t be surprised if hardware continues to become more specialized (so instead of the “do it all” CPU, we continue to get CPU, network accelerator, graphics accelerator, a variety of ML accelerators, etc, often combined in one chip or a chiplet (2D or 3D combination of specialized bare die)).

        Look at NAND scaling – now they’re scaling UP, which is a linear increase (double the layers, double the capacity — and double the fab processing time) instead of the exponential increase from feature size scaling (half the feature size, quadruple the capacity).
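
        A minimal illustration of that contrast (made-up generation numbers, just to show the shape of the two curves):

        ```python
        # Two NAND scaling regimes: stacking layers grows capacity linearly per
        # generation, while halving the feature size quadruples areal density.

        def scale_up(base, layers_factor):
            # "Scaling UP": capacity tracks layer count (and so does fab processing time).
            return base * layers_factor

        def scale_down(base, shrink_steps):
            # Feature-size scaling: each halving of the feature size quadruples capacity.
            return base * 4 ** shrink_steps

        for gen in range(1, 5):
            print(f"gen {gen}: layers x{gen + 1} -> capacity x{scale_up(1, gen + 1)}; "
                  f"{gen} shrink(s) -> capacity x{scale_down(1, gen)}")
        ```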

        Trees don’t grow to the sky. Exponential increases don’t last forever – a few decades is the maximum. But there are a lot of under-the-radar improvements that really help, like in materials science (improved concrete, improved metallurgy for turbines, etc.) and semiconductor packaging (a lot less sexy than front-end fabs, but at least as important, especially with 3D stacking approaches).

        • All true.
          But architecture changes, substrate changes, 3D etching and, barely getting started, quantum computing will all keep Moore’s law and its descendants busy for decades.

          In fact, one technology that is barely crawling is FPGAs.
          Once that gets refined and scaled down we’ll be able to see the same commodity chip reprogram itself in part or totally on the fly. At that point hardware and software become one and the same.

          Oh, and clock rate?
          The race is back on: Intel 8 GHz chips are out there. AMD has been there for a while. Next up: 9 GHz.
          https://wccftech.com/intel-breaks-8-ghz-frequency-after-more-than-a-decade-with-ln2-overclocked-raptor-lake-core-i9-13900k-cpu/

          All the recent power reduction and management techniques have made their way to the newest CPUs, as well as advanced cooling above and beyond vapor cooling. Thermoelectric CPU coolers are making a comeback.

          There’s still life in the old tech.

      • The smart phones that are more powerful than a typical PC also cost more than a typical PC!

        But in some areas smartphones lag, like graphics – you can’t get RTX 3080 power in a smartphone 🙂

        • Really? (You ever hear of the chess-playing dog?)

          Anyway, AMD and Qualcomm are working on that.
          In the meantime, Logitech and Steam are staking their claims to mobile gaming, along with a half dozen Chinese handheld gaming PCs. They’re not phones but they *can* do VoIP.

  6. There *will be* a slowdown in tech over the next few years, but it will not be because of anything intrinsic to tech itself. Rather it will be because of a reduction in free investment capital. This will be due to three things:

    1- Inflation: available money won’t go as far. For the past thirty years the US has been flooded with cheap money (especially the past 15 years) that could be thrown at any half decent idea regardless of the sustainability of the new business. (The Internet bubble was a good expression, the sub-prime mess wasn’t.) The returns on the successes outweighed the losses on the failures and there really were no alternatives as enticing. (And it’s not just IT that has been benefiting from the cheap cash–look at all the money flowing into fusion and New Space.) Those days are ending; money will find less risky landing spots.

    2- Taxes are rising and will go way up. I don’t think the wealth tax proposals will prosper (it has been suggested that a wealth tax is not just stupid–even France ended theirs in the face of brain drain and capital flight–but actually unconstitutional for being an “unapportioned direct tax”), but IdiotPoliticians™ will need to raise taxes to keep up their blatant vote buying of recent times. Money sucked up by the government will not be available for investment.

    3- What free capital remains is going to be sucked into force feeding the marginal (and ineffective) “green tech” projects and the emerging transatlantic subsidy war. (Boeing vs Airbus is nothing compared to what the gerontocracy has unleashed.) If Putin had waited a year or so, “the west” would be so fractured by economic warfare he could’ve walked into Ukraine. Not in three days but three months as only Poland would be supporting Ukraine. Just like Europe and Japan, the US now has a federal industrial policy. And just as in Japan and Europe, the government will be a drag on innovation rather than an enabler. Be prepared for money to go to entrenched “national champions” rather than disruptive startups. Friends of the party and the unions in particular. Note the ongoing wars against Musk, Microsoft, Google, Apple, and Amazon. (All the tech companies with useful cash stashes.)

    It won’t stop innovation but it will slow it.
    At best a drag, at worst stagnation.

    Hopefully the blatant vote buying stops working soon enough to minimize the damage.

    • About number 1, above: The Fed isn’t just ending Quantitative Easing; since last September it has moved to Quantitative Tightening: sucking cash out of the economy ($9T) instead of flooding it.

      https://www.forbes.com/sites/forbesfinancecouncil/2023/02/03/the-slippery-slope-of-the-feds-shrinking-balance-sheet/?sh=5b997003311a

      It’ll avoid the hyperinflation the Modern Economic Theory pundits pretend is impossible (“Modern monetary theory (MMT) is a heterodox macroeconomic supposition that asserts that monetarily sovereign countries (such as the U.S., U.K., Japan, and Canada) which spend, tax, and borrow in a fiat currency that they fully control, are not operationally constrained by revenues when it comes to federal government spending. Put simply, modern monetary theory decrees that such governments do not need to rely on taxes or borrowing for spending since they can print as much money as they need and are the monopoly issuers of the currency.”) but in the process it will raise the bar on what capital is used for. New priorities.

      Now add #2 and #3.

      • Modern Economic Theory got us where we are today. Regarding the effect of federal spending on inflation, Biden told us, “Milton Friedman isn’t in charge anymore.” Looks like Friedman won that one.

        MET is new, and met its first test. It won’t be around much longer.

        • There are soooo many things wrong with MET that it might take a whole essay to *summarize* all the ways it fails.

          As you point out, a good portion of the new age of inflation stems from the gerontocracy’s attempt to stealthily adopt it. Not all, though. Another, unavoidable part (say 3-5%) stems from demographics and the end of globalization. But stealth MET has avoidably doubled/tripled the unavoidable pain. And triggered an economic war with the EU.

          The shortest, best explanation of why MET is brain-dead stupid is that under MET currency is not an economic tool but rather a political tool, the way it is used in Asia (China and Japan, among many) and by the eurozone. It has been known to work for a decade or two, but the collapse that follows leads to generations-long stagnation. (Japan is at 30 years and counting, China is at three years and accelerating. Venezuela didn’t even get the temporary early boost.)

          MET is anything but economics.

  7. Voice dictation and control is one area that PG expects to improve.

    This is actually something I’ve been looking at lately and I agree.

    OpenAI released Whisper as open source a little while ago. But the problem with this model (and others, such as Stable Diffusion) is that it is hard to integrate directly into other software.

    Since it is open source, though, it leaves room for innovation… Along came a developer who reimplemented Whisper in C++ to create a library called whisper.cpp. It uses the model data from Whisper, but is much easier to integrate into other software.

    In my (limited) testing it works quite well and, most importantly, it all runs locally!
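
    For anyone who wants to try it, here is a minimal local-transcription sketch using OpenAI’s reference Python package (whisper.cpp exposes the same idea from C++; “audio.wav” below is just a placeholder file name):

    ```python
    # Local speech-to-text with the open-source Whisper model.
    # Assumes `pip install openai-whisper` and ffmpeg installed for audio decoding.
    import whisper

    model = whisper.load_model("base")      # load the model weights locally
    result = model.transcribe("audio.wav")  # transcription runs on the local machine
    print(result["text"])                   # the recognized text
    ```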

  8. The fundamental problem with criticisms of Moore’s Law is that the limfac in information processing has changed over time.

    Limfac? Military and logistics slang for “limiting factor.”

    In the 1970s through early 2000s, the limfac was almost always processing capacity, as inherent in the subject of Moore’s Law: The density of processing-capable components on a single processing package (the “chip”). The reason for this is fairly simple: Contemporaneous processing packages were always physically less capable than the data storage and I/O systems to which they were connected.

    Now, not so much. The processor chips are now considerably ahead of their data streams, whether measured by storage devices or (as is increasingly apparent to anyone dependent upon “cloud-based” anything) communication between the processor and the storage devices. Although broadband communication has gotten (and will continue to get) faster, its rate of increase long ago fell behind; the effective rate of data transfer for a bog-standard broadband connection now is barely twice what it was in 2009, and it’s increasingly difficult to see how that is going to get better in effective deployment. It’s very much the same for local storage: Those 2 TB drives are great, but they’re still communicating through the same SATA interface at a less-than-twice-effective-rate speed as in 2007. (And the effective rate of data transfer for M.2 drives is of similar scale since 2015.)

    So the purported “loss of validity” of Moore’s Law is largely meaningless anyway — a processor that is waiting for data or choked by its own limited output channels is no more capable of faster output than a slower processor.
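
    A rough way to see that point on any machine is to time a storage-bound pass against a compute-bound pass (a sketch with an assumed placeholder file, “bigfile.bin”; the exact numbers will vary by system):

    ```python
    # Compare time spent waiting on storage vs. time spent computing.
    import time

    def timed(label, fn):
        start = time.perf_counter()
        fn()
        print(f"{label}: {time.perf_counter() - start:.2f} s")

    def storage_pass():
        # Stream a large file in 16 MB chunks; "bigfile.bin" is a placeholder path.
        with open("bigfile.bin", "rb") as f:
            while f.read(16 * 1024 * 1024):
                pass

    def compute_pass():
        # A purely CPU-bound loop for comparison.
        total = 0
        for i in range(50_000_000):
            total += i * i

    timed("storage-bound", storage_pass)
    timed("compute-bound", compute_pass)
    # If the first time dominates, the limfac is the data stream, not the processor.
    ```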
