[A misleading Newsweek piece, “We Can’t Get There From Here,” which I will respond to in detail later this week, is the inspiration to update this earlier post on the breakthrough myth.]

This post will explain why some sort of massive government Apollo program or Manhattan Project to develop new breakthrough technologies is not a priority component of the effort to stabilize at 350 to 450 ppm.

Put more quantitatively, the question is — What are the chances that multiple (4 to 8+) carbon-free technologies that do not exist today can each deliver the equivalent of 350 gigawatts of baseload power (~2.8 billion megawatt-hours a year) and/or 160 billion gallons of gasoline cost-effectively by 2050? [Note — that is about half of a stabilization wedge.] For the record, the U.S. consumed about 3.7 billion MWh of electricity in 2005 and about 140 billion gallons of motor gasoline.
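As a sanity check on those numbers, here is a minimal back-of-envelope sketch (my own, not from any cited source; the ~90% capacity factor is an assumption typical of baseload plants):

```python
# Back-of-envelope check: annual energy from 350 GW of baseload capacity.
HOURS_PER_YEAR = 8760
baseload_gw = 350
capacity_factor = 0.90  # assumed; typical for baseload plants

annual_mwh = baseload_gw * 1_000 * HOURS_PER_YEAR * capacity_factor
print(f"{annual_mwh / 1e9:.1f} billion MWh/year")  # -> ~2.8 billion MWh

us_2005_mwh = 3.7e9  # U.S. consumption figure cited above
print(f"= {annual_mwh / us_2005_mwh:.0%} of 2005 U.S. electricity use")
```

That is, each half-wedge technology would have to deliver roughly three-quarters of total 2005 U.S. electricity consumption.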

Put that way, the answer to the question is painfully obvious: “two chances — slim and none.” Indeed, I have repeatedly challenged readers and listeners over the years to name even a single technology breakthrough with such an impact in the past three decades, after the huge surge in energy funding that followed the energy shocks of the 1970s. Nobody has ever named a single one that has even come close.

Yet somehow the government is not just going to invent one TILT (Terrific Imaginary Low-carbon Technology) in the next few years — it is going to invent several TILTs. Seriously. Hot fusion? No. Cold fusion? As if. Space solar power? Come on, how could that ever compete with solar baseload (aka CSP)? Hydrogen? It isn’t even an energy source, and after billions of dollars of public and private research in the past 15 years — including several years running as the single biggest focus of the DOE office on climate solutions I once ran — it still has essentially no chance whatsoever of delivering a major cost-effective climate solution by midcentury, if ever (see “California Hydrogen Highway R.I.P.”).

I don’t know why the breakthrough crowd can’t see the obvious — so I will elaborate here. I will also discuss a major study that explains why deployment programs are so much more important than R&D at this point. Let’s keep this simple:

  • To stabilize below 450 ppm, we need to deploy by 2050 some 12 to 14 stabilization wedges (each delivering 1 billion tons of avoided carbon) covering both efficient energy use and carbon-free supply (see here). The technologies we have today, plus a few that are on the verge of being commercialized, can provide the needed low-carbon energy [see “How the world can stabilize at 350 to 450 ppm: The full global warming solution (updated)”].
  • Myriad energy-efficiency technologies are already cost-effective today. Breaking down the barriers to their deployment now is much, much more important than developing new “breakthrough” efficient TILTs, since those would simply fail in the marketplace because of the same barriers. Cogeneration is perhaps the clearest example of this.
  • On the supply side, deployment programs (coupled with a price for carbon) will always be much, much more important than R&D programs because new technologies take an incredibly long time to achieve mass-market commercial success. New supply TILTs would not simply emerge at a low cost. They need volume, volume, volume — steady and large increases in demand over time to bring the cost down, as I discuss at length below.
  • No existing or breakthrough technology is going to beat the price of power from a coal plant that has already been built — the only way to deal with those plants is a high price for carbon or a mandate to shut them down. Indeed, that’s why we must act immediately not to build those plants in the first place.
  • If a new supply technology can’t deliver half a wedge, it won’t be a big player in achieving 350-450 ppm.

For better or worse, we are stuck through 2050 with the technologies that are commercial today (like solar thermal electric) or that are very nearly commercial (like plug-in hybrids).

I have discussed most of this at length in previous posts (listed below), so I won’t repeat all the arguments here. Let me just focus on a few key points. A critical historical fact was explained by Royal Dutch/Shell, in their 2001 scenarios for how energy use is likely to evolve over the next five decades (even with a carbon constraint):

“Typically it has taken 25 years after commercial introduction for a primary energy form to obtain a 1 percent share of the global market” [PDF]

Note that this tiny toe-hold comes 25 years after commercial introduction. The first transition from scientific breakthrough to commercial introduction may itself take decades. We still haven’t seen commercial introduction of a hydrogen fuel cell car and have barely seen any commercial fuel cells — over 160 years after they were first invented.

This tells you two important things. First, new breakthrough energy technologies simply don’t enter the market fast enough to have a big impact in the time frame we care about. We are trying to get 5% to 10% shares — or more — of the global market for energy, which means massive deployment by 2050 (if not sooner).
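A quick compounding sketch (my own illustration, with assumed growth rates) shows just how long the climb from Shell’s 1 percent toe-hold to a 10 percent share takes:

```python
import math

# Years to grow a global energy-market share from 1% to 10%
# at various sustained annual growth rates (illustrative assumption).
start_share, target_share = 0.01, 0.10
for annual_growth in (0.10, 0.15, 0.20, 0.30):
    years = math.log(target_share / start_share) / math.log(1 + annual_growth)
    print(f"{annual_growth:.0%}/yr: {years:.0f} more years to reach a 10% share")
```

Even at a sustained 30 percent a year — faster than almost any energy technology has managed — that is roughly 9 more years on top of the ~25 needed to reach the 1 percent share, plus the decades from breakthrough to commercial introduction.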

Second, if you are in the kind of hurry we are all in, then you are going to have to take unusual measures to deploy technologies far more aggressively than has ever occurred historically. That is, speeding up the deployment side is much more important than generating new technologies. Why? Virtually every supply technology in history has a steadily declining cost curve, whereby greater volume leads to lower cost in a predictable fashion because of economies of scale and the manufacturing learning curve.
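To make the volume-to-cost relationship concrete, here is a minimal sketch of the standard experience-curve formula; the starting cost and the 0.80 progress ratio (a 20 percent cost drop per doubling of cumulative volume) are illustrative assumptions, not figures from this post or the IEA report quoted below:

```python
import math

def unit_cost(cumulative_volume, c0, v0, progress_ratio):
    """Unit cost after `cumulative_volume`, given cost c0 at volume v0.

    progress_ratio is the cost multiplier per doubling of cumulative
    volume, e.g. 0.80 means each doubling cuts cost by 20%.
    """
    doublings = math.log2(cumulative_volume / v0)
    return c0 * progress_ratio ** doublings

# Illustrative: a technology costing $4/W after its first 1 GW of production.
for gw in (1, 2, 4, 8, 16, 32):
    print(f"{gw:3d} GW cumulative -> ${unit_cost(gw, 4.0, 1, 0.80):.2f}/W")
```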

WHY DEPLOYMENT NOW COMPLETELY TRUMPS RESEARCH

A major 2000 report by the International Energy Agency, Experience Curves for Energy Technology Policy, has a whole bunch of experience curves for various energy technologies. Let me quote some key passages:

Wind power is an example of a technology which relies on technical components that have reached maturity in other technological fields…. Experience curves for the total process of producing electricity from wind are considerably steeper than for wind turbines. Such experience curves reflect the learning in choosing sites for wind power, tailoring the turbines to the site, maintenance, power management, etc, which all are new activities.

Or consider PV:

Existing data show that experience curves provide a rational and systematic methodology to describe the historical development and performance of technologies….

The experience curve shows the investment necessary to make a technology, such as PV, competitive, but it does not forecast when the technology will break-even. The time of break-even depends on deployment rates, which the decision-maker can influence through policy. With historical annual growth rates of 15%, photovoltaic modules will reach break-even point around the year 2025. Doubling the rate of growth will move the break-even point 10 years ahead to 2015.

Investments will be needed for the ride down the experience curve, that is for the learning efforts which will bring prices to the break-even point. An indicator for the resources required for learning is the difference between actual price and break-even price, i.e., the additional costs for the technology compared with the cost of the same service from technologies which the market presently considers cost-efficient. We will refer to these additional costs as learning investments, which means that they are investments in learning to make the technology cost-efficient, after which they will be recovered as the technology continues to improve.
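A toy simulation of those two passages (all parameters below are my own illustrative assumptions, not data from the report) shows how faster deployment growth pulls the break-even year forward and how the learning investment accumulates until then:

```python
import math

def simulate(annual_growth, start_year=2000, initial_cost=4.0,
             breakeven_cost=1.0, initial_annual_gw=1.0, progress_ratio=0.80):
    """Grow deployment at `annual_growth` until the experience curve
    drives unit cost down to break-even; tally the premium paid along
    the way (the IEA's 'learning investment')."""
    year, cost = start_year, initial_cost
    annual = cumulative = initial_annual_gw
    learning_investment = 0.0  # ($/W premium) * (GW deployed) = $ billions
    while cost > breakeven_cost:
        learning_investment += (cost - breakeven_cost) * annual
        year += 1
        annual *= 1 + annual_growth
        cumulative += annual
        cost = initial_cost * progress_ratio ** math.log2(
            cumulative / initial_annual_gw)
    return year, learning_investment

for growth in (0.15, 0.30):
    year, inv = simulate(growth)
    print(f"{growth:.0%}/yr growth: break-even ~{year}, "
          f"learning investment ~${inv:.0f} billion")
```

Doubling the growth rate in this toy model moves break-even forward by several years, qualitatively matching the report’s PV example.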

Here is a key conclusion:

for major technologies such as photovoltaics, wind power, biomass, or heat pumps, resources provided through the market dominate the learning investments. Government deployment programmes may still be needed to stimulate these investments. The government expenditures for these programmes will be included in the learning investments.

Obviously government R&D, and especially first-of-a-kind demonstration programs, are critical before a technology can be introduced to the marketplace on a large scale — and I’m glad Obama has doubled spending in this area. But we “expect learning investments to become the dominant resource for later stages in technology development, where the objectives are to overcome cost barriers and make the technology commercial.”

We are really in a race to get technologies into the learning curve phase: “The experience effect leads to a competition between technologies to take advantage of opportunities for learning provided by the market. To exploit the opportunity, the emerging and still too expensive technology also has to compete for learning investments.”

In short, you need to get from first demonstration to commercial introduction as quickly as possible to be able to then take advantage of the learning curve before your competition does. Again, that’s why if you want mass deployment of the technology by 2050, we are mostly stuck with what we have today or very soon will have. Some breakthrough TILT in the year 2025 will find it exceedingly difficult to compete with technologies like CSP or wind that have had decades of such learning.

And that is why the analogy of a massive government Apollo program or Manhattan Project is so flawed. Those programs were designed to create unique noncommercial products for a specialized customer with an unlimited budget. Throwing money at the problem was an obvious approach. To save a livable climate we need to create mass-market commercial products for lots of different customers who have limited budgets. That requires a completely different strategy.

Finally, it should be obvious (here), but it apparently isn’t, so I’ll repeat:

The risk of climate change, however, poses an externality which might be very substantial and costly to internalise through price alone. Intervening in the market to support a climate-friendly technology that may otherwise risk lock-out may be a legitimate way for the policymaker to manage the externality; the experience effect thus expands his policy options. For example, carbon taxes in different sectors of the economy can activate the learning for climate-friendly technologies by raising the break-even price.

So, yes, a price for carbon is exceedingly important — more important, as I have argued, than funding the search for TILTs.

THE BREAKTHROUGH BUNCH

The NYT’s Revkin (here) interviewed a whole bunch of people who think we need “massive public investments” and breakthroughs. Revkin writes: “Most of these experts also say existing energy alternatives and improvements in energy efficiency are simply not enough.”

The devil is always in the details of the quotes — especially since everybody I know wants more federal investment in low-carbon technologies. And, of course, some of the folks Revkin quotes are longtime delayers, like W. David Montgomery of Charles River Associates — who has testified many times that taking strong action on climate change would harm the economy. He says stabilizing temperatures by the end of the century “will be an economic impossibility without a major R.& D. investment.” Well, of course he would. In any case, we don’t have until the end of the century — yes, it would certainly be useful to have new technologies in the second half of this century, but the next couple of decades are really going to determine our fate.

Revkin quotes my friend Jae Edmonds as saying we need to find “energy technologies that don’t have a name yet.” Jae and I have long disagreed on this, and he is wrong. His economic models have tended to assume a few major breakthroughs in a few decades, and that’s how he solves the climate problem. Again, I see no evidence that that is a plausible solution, or that we have the time to wait and see.

I would estimate that the actual federal budget today that goes toward R&D breakthroughs that could plausibly deliver a half wedge or more by 2050 (i.e. not fusion, not hydrogen) is probably a few hundred million dollars at most. I wouldn’t mind raising that to a billion dollars a year. But I wouldn’t spend more, especially as long as the money was controlled by a Congress with its counterproductive earmarks. I could probably usefully spend 10 times that on deployment (not counting tax policy), again as long as the money was not controlled by Congress. Since that may be difficult if not impossible to arrange, we have to think hard about what the size of a new federal program might be.

Yet another reason we don’t need some sort of massive government Apollo program or Manhattan Project is that the venture-capital community has massively ramped up cleantech spending just where it is most needed — key low-carbon technologies that have a serious chance of becoming commercial in the next 3 to 5 years (see Stimulus and venture capital sow seeds for cleantech industry’s “revival”).

Roger Pielke, Jr., has said (here) that my proposed 14 wedges require betting the future on “some fantastically delusional expectations of the possibilities of policy implementation” and that my allegedly “fuzzy math explains exactly why innovation must be at the core of any approach to mitigation that has a chance of succeeding.” Well, we’ve seen my math wasn’t fuzzy (here).

But you tell me, what is more delusional — 1) that we take a bunch of commercial or very near commercial technologies and rapidly accelerate their deployment to wedge-scale over the next four decades or 2) that in the same exact time frame, we invent a bunch of completely new technologies “that don’t have a name yet,” commercialize them, and then rapidly accelerate them into the marketplace so they achieve wedge scale?

And so I assert again, the vast majority — if not all — of the wedge-sized solutions for 2050 will come from technologies that are now commercial or very soon will be. And federal policy must be designed with that understanding in mind. So it seems appropriate to end this post with an excerpt from the conclusion of the IEA report:

A general message to policy makers comes from the basic philosophy of the experience curve. Learning requires continuous action, and future opportunities are therefore strongly coupled to present activities. If we want cost-efficient, CO2-mitigation technologies available during the first decades of the new century, these technologies must be given the opportunity to learn in the current marketplace. Deferring decisions on deployment will risk lock-out of these technologies, i.e., lack of opportunities to learn will foreclose these options making them unavailable to the energy system.

… the low-cost path to CO2-stabilisation requires large investments in technology learning over the next decades. The learning investments are provided through market deployment of technologies not yet commercial, in order to reduce the cost of these technologies and make them competitive with conventional fossil-fuel technologies. Governments can use several policy instruments to ensure that market actors make the large-scale learning investments in environment-friendly technologies. Measures to encourage niche markets for new technologies are one of the most efficient ways for governments to provide learning opportunities. The learning investments are recovered as the new technologies mature, illustrating the long-range financing component of cost-efficient policies to reduce CO2 emissions. The time horizon for learning stretches over several decades, which require long-term, stable policies for energy technology.

Deployment, deployment, deployment, R&D, deployment, deployment, deployment.

This post was created for ClimateProgress.org, a project of the Center for American Progress Action Fund.