Sustaining vs. Disruptive Innovation: What’s the Difference?

From The Harvard Business School:

Innovation is on the minds of professionals across industries, and rightfully so. Strategizing for innovation can enable businesses to provide customers with continued value, create new market segments, and push competitors out of segments they once owned.

According to a recent McKinsey Global Survey, 84 percent of executives feel innovation is extremely or very important to their companies’ growth strategies. When formulating a business strategy, understanding the different types of innovation can help you conceptualize your business’s place in its industry, identify what your current innovation strategy is and if you’d like to change it, and recognize competitors’ innovation strategies.

These insights can inform innovation strategies that drive purposeful, proactive product decisions to disrupt an industry or avoid being disrupted by another organization.

The two types of innovation are sustaining and disruptive. Here’s a breakdown of each, the key factors that differentiate them, and the importance of incorporating disruptive innovation into your strategic mindset.

What Is Sustaining Innovation?

Sustaining innovation occurs when a company creates better-performing products to sell for higher profits to its best customers. Typically, sustaining innovation is a strategy used by companies already successful in their industries. The motivating factor in sustaining innovation is profit; by creating better products for its best customers, a business can pursue ever-higher profit margins.

One example discussed in the online course Disruptive Strategy is the introduction of laptops in the computing industry. Laptop computers were a sustaining innovation that followed the personal desktop computer. The computers’ qualities and abilities were roughly equal, with the laptop offering novel portability. This leveled-up version of the same product catered to desktop users willing to pay for the increased flexibility the laptop provided.

In a vacuum, relying on sustaining innovation is a sound strategy that involves continually creating better versions of your product to gain higher profit margins from customers who are willing to pay. Yet, some of the most successful companies built on sustaining innovation fail.

“Why is it that good companies run by good, smart people find it so hard to sustain their success?” Harvard Business School Professor Clayton Christensen asks in Disruptive Strategy. “In our research, success is very hard to sustain. The common reason why successful companies fail is this phenomenon we call ‘disruption.’”

. . . .

What Is Disruptive Innovation?

Disruptive innovation—the second type of innovation and the force behind disruption—occurs when a company with fewer resources moves upmarket and challenges an incumbent business. There are two types of disruptive innovation:

  • Low-end disruption, in which a company uses a low-cost business model to enter at the bottom of an existing market and claim a segment
  • New-market disruption, in which a company creates and claims a new segment in an existing market by catering to an underserved customer base

Both types of disruptive innovation cause the incumbent company—which relies on sustaining innovation—to retreat upmarket rather than fight the new entrant. This is because the entrant has selected a segment (either at the bottom of the existing market or a new market segment) in which profit margins are relatively low. The incumbent company’s innovation strategy is driven by higher profit margins, causing it to pull out of the segment in question and focus on those with even higher profit margins.

As the entrant’s product offerings improve, it moves into segments with those higher profit margins. Once again, the incumbent company is motivated to retreat upmarket rather than fight for the lower-profit market segments.

Eventually, the entrant pushes the incumbent out of the market altogether, having improved its product so much that it claims all existing market segments or renders the incumbent’s products obsolete.

Returning to the example of the computing industry, the introduction of smartphones was a disruptive innovation, specifically new-market disruption. Smartphones catered to a new market segment of customers who didn’t need the level of capabilities offered by a laptop—basic, convenient internet access at a fraction of the cost of a desktop or laptop computer was enough. As the quality of smartphones improves, the laptop and desktop may be pushed further upmarket and, eventually, into obsolescence.

Link to the rest at The Harvard Business School

PG has been interested in disruptive technology/innovation ever since he first heard the term a very long time ago.

The term disruptive technology was originally coined by Harvard Business School professor Clayton Christensen in 1995 and further expounded in his book The Innovator’s Dilemma in 1997. In the follow-up work, The Innovator’s Solution, he replaced the term with disruptive innovation.

12 thoughts on “Sustaining vs. Disruptive Innovation: What’s the Difference?”

  1. “Disruptive innovation” is also a pretty obvious subtype of Schumpeter’s (well, technically Sombart’s) “creative destruction.” Schumpeter was focused far more on the comparative-advantage changes, but as many (not all) technological changes also implicate aspects of comparative advantage it’s not really that far off.

  2. “As the quality of smartphones improves, the laptop and desktop may be pushed further upmarket and, eventually, into obsolescence.”

    No.
    Human factors.

    Even foldable phones can’t provide a proper input scheme to match keyboard+mouse.
    Even laptops now support triple-monitor displays, and some monster desktops can *productively* use six or nine displays. Phones can support at most one.

    Unless “eventually” = generations, smartphones aren’t replacing PCs during the lifetime of anybody now living. In case you haven’t noticed, mainframes are still alive and well; what is “cloud computing” but timeshared mainframes? AWS, Azure, even Google Cloud are *growing* businesses. And the current “AI” wave of software only works on, or with the assistance of, specialized datacenter computers that are but a further evolution of the centralized computing glasshouses of old.

    For the next few years, “AI” apps are going to boost demand for this breed of computing hardware, along with its kin, cloud gaming, built on similar hardware. It turns out that the “plumbing” needed to render all those modern photorealistic or Pixar-grade game graphics, locally or remotely, is useful for running “AI” applications. Useful, but not sufficient. The core hardware developers of PC components (CPUs and GPUs) are preparing the next evolution of PCs: personal computers capable of running “AI” apps locally, with minimal (if any) remote resources. After all, who is going to willingly upload their latest WIP to some remote system for advanced (spelling/grammar/consistency) checking (for a fee) when their 2025-vintage $500 Windows 12 laptop or mini desktop can do that work locally?

    The death of the PC has been greatly exaggerated for over a decade now, mostly because of the usual economic illiteracy about tech adoption curves and the beancounters’ bad habit of tracking everything by consumer spending rather than unit sales. As useful PC tech becomes cheaper, more and more people can meet their computing needs with $300-500 laptops instead of the classic $1000-2000 systems. Those still exist, but to meet the needs of high-end users, not the mainstream personal computer user.

    What smartphones, tablets, internet speakers, and smart TVs do is address (and grow) the separate market for digital communication and consumption services. Yes, those were carried out via PC a generation ago, but only because dedicated consumer devices didn’t exist at economically viable prices. (Anybody remember the Sony eVilla and its kin, the internet access gadgets?)

    PCs did not kill mainframes; they mostly addressed a crying need for locally controlled computing. And smartphones aren’t killing PCs; they merely address a market that is better served by a device built around a useful subset of PC capabilities, no different from gaming consoles, which have been with us for over 50 years now. (Really. Pong came out in 1972. So did the Magnavox Odyssey. Time flies.)

    Tech disruptions are important inflection points that are easy to identify, albeit in hindsight, but tech evolution in properly run industries is at least as significant and harder to track, until its accumulated incremental changes hit us in the face with a “say what?” moment.

    There’s a couple out there right now and a few more coming in at least three sectors.
    (Four, really, but one can hope not to see the fourth in action.)

    Future shock is still real.
    You have been warned. 😉

  3. Laptops/tablets are unlikely to be supplanted by phones. Two words explain why:
    Aging Users
    Phones are – despite their increasing size, nearing the size of small tablets – too d*** small to read. As the now-young age, they will hit the wall of no longer being able to easily see the small screens (although the ability to project those screens into the air may well offset that deficit). Even innovative surgeries to correct vision will not fully offset the decreasing acuity that accompanies aging.
    Another factor is the size of the keyboards, physical or on-screen. The limiting factor is the flexibility of aging fingers and – combined with decreasing vision – the ability to hit the right keys.
    True, voice commands have some ability to replace that input method, but – and I’ve tried using voice commands for some time, due to arthritis issues – the vocal apparatus also deteriorates with age. Even younger users struggle with getting their commands recognized; add to that competition from noisy environments, non-native speakers, and accents/dialects, and the situation is unlikely to be completely resolved in the near future.

    • We may see the same innards in several different sizes of devices. If users are involved, they will gravitate to the most user friendly experience. Anyone interested in writing a program on a phone screen?

        • Not sure how that would work for those of us who look at the keyboard to guide our two fingers. I’ve never tried any of the goggles or glasses.

          • Not well.
            Virtual displays come in two flavors: closed, as in VR goggles, and open, as in the linked AR glasses. (With AR = augmented reality.) Some closed displays, like Apple’s fake AR headset, add cameras to the headset to display an (optional) view of the world before the user because it is easier/cheaper than a true transparent display.

            However…
            All virtual displays suffer the same limitation: line of sight.
            With VR, the solution to input is on-screen keyboards and either eye tracking or handheld controllers. We’ve all seen those on phones and tablets.

            AR can do the same or move the displayed content up the field of view as the head moves down. No different than a real world display. Which means no real benefit except privacy. Useful on planes, I suppose.

            So far, VR is a solution in search of a problem, and AR is useful and promising but a pricey niche, not ready for broad deployment. Neither is going to disrupt anything except the budgets of Sony and Facebook’s Meta. And not in a good way.

      • Did you use the Kindle internet reading app or download the Kindle app?
        I tried the former ages ago (WORD, too). Not needed but useful to know there is a backup.
