The cost of electricity has become daily news fodder and looks like it will be for some time to come.
In this article I want to focus on the costs of generating electricity from different sources and try to unpick a few fallacies and misuses of measures commonly used to represent these costs.
Supply = Demand
The first thing to remember is that, unlike many goods and most physical commodities, electrical energy is not produced in the form of “widgets” which can be separately produced, delivered, and used at different times and at different rates. Essentially, in any large electricity system all three actions – production, delivery, and usage – happen instantaneously and at the same rate (in aggregate). If they don’t, the lights go out.
Even where a system has the capability to store energy in the form of pumped hydro or batteries, to pick a couple of topical technologies, electrical energy first gets converted to another form – gravitational potential by pumping water uphill, or chemical energy by recharging a battery – and then reconverted to electrical energy when it’s used. So as far as electricity itself is concerned, production, delivery, and usage still happen all at once.
Because its production and usage are inextricably tied together, we can’t just make electricity in the most “cost-efficient” way at times that best suit production – whether that’s from “baseload” power stations running constantly 24/7 (which of course they don’t – there’s maintenance, and also “unplanned outages” – i.e. breakdowns), or from windfarms and solar panels as and when wind and sunshine allow – and then use what’s previously been made when we need to. The rate at which we make it must exactly match the rate at which we use it, second to second.
And as everyone knows we don’t use electricity anything like uniformly:
This has very important implications for working out the real costs of electricity production.
LCOE and its Assumptions
Unfortunately the most common metric used to compare costs of different forms of generation – and this applies in industry literature, not just the popular press – completely skips over this fundamental truth.
This measure goes by various names but often it’s called the Long-run (or “Levelised”) Cost Of Electricity – “LCOE”. In principle, it’s a simple yardstick that attempts to combine the various types and levels of cost involved in generating power into a single dollar figure, and divide this figure by a quantity of electricity generated to produce a per-unit cost of electrical energy, usually expressed in units of dollars per megawatt hour ($/MWh).
As the “Long-run” part of the name suggests, a feature of this metric is an attempt to attribute, or spread, the large up-front investment – the capital cost – of power generation technologies across the energy produced over long plant lifetimes – decades in most cases.
In doing this, as well as capturing the different types of cost involved, the calculations behind the measure rely on some critical assumptions about the rate of return on investment (or discount rate in financial engineering terms), and capacity utilization, i.e. energy production relative to maximum capacity, for different technologies.
What LCOE measures usually do is express all the costs of building, owning, and running a power station of a given megawatt capacity in dollars per year (using the rate of return to convert up-front capital costs to an annual equivalent), and divide this by output in megawatt hours per year – which comes from an assumption about capacity utilization.
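That annualise-and-divide mechanic can be sketched in a few lines of Python. All the input figures below – capital cost, discount rate, lifetime, fuel and O&M costs – are invented for illustration, not data for any real plant:

```python
def capital_recovery_factor(rate, years):
    """Factor converting an up-front lump sum into an equivalent
    stream of equal annual payments at the given rate of return."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcoe(capex_per_mw, fixed_om_per_mw_yr, variable_om_per_mwh,
         fuel_per_mwh, rate, years, capacity_factor):
    """Levelised cost in $/MWh for 1 MW of installed capacity."""
    annual_capital = capex_per_mw * capital_recovery_factor(rate, years)
    annual_fixed = annual_capital + fixed_om_per_mw_yr   # $/yr
    annual_output_mwh = 8760 * capacity_factor           # MWh/yr from 1 MW
    return annual_fixed / annual_output_mwh + variable_om_per_mwh + fuel_per_mwh

# Hypothetical "baseload" plant: $3m/MW to build, 30-year life,
# 8% discount rate, assumed to run at an 85% capacity factor
cost = lcoe(capex_per_mw=3_000_000, fixed_om_per_mw_yr=50_000,
            variable_om_per_mwh=4, fuel_per_mwh=25,
            rate=0.08, years=30, capacity_factor=0.85)
print(f"LCOE = ${cost:.2f}/MWh")
```

Note that the capacity factor is simply asserted as an input – which is exactly the assumption examined below.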
Because a large proportion of the total costs of building, owning, and operating any form of generation are fixed, the per-unit cost of production varies strongly and inversely with this assumed capacity utilization, as illustrated on the chart below:
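The same inverse relationship can be shown numerically. The fixed and variable cost figures here are again purely illustrative assumptions:

```python
# Illustrative only: these cost figures are invented, not real plant data.
FIXED_PER_MW_YR = 300_000   # $/MW/yr (annualised capital + fixed O&M)
VARIABLE_PER_MWH = 30       # $/MWh (fuel + variable O&M)
HOURS_PER_YEAR = 8760

def unit_cost(capacity_factor):
    """Total $/MWh for 1 MW of capacity at the given utilization."""
    annual_output_mwh = HOURS_PER_YEAR * capacity_factor
    return FIXED_PER_MW_YR / annual_output_mwh + VARIABLE_PER_MWH

for cf in (0.9, 0.6, 0.3, 0.1):
    print(f"capacity factor {cf:.0%}: ${unit_cost(cf):.0f}/MWh")
# capacity factor 90%: $68/MWh
# capacity factor 60%: $87/MWh
# capacity factor 30%: $144/MWh
# capacity factor 10%: $372/MWh
```

Halving the assumed utilization roughly doubles the fixed-cost component of the per-unit figure, so the choice of capacity factor dominates the answer.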
And this is the root of a key problem with LCOE as we often see it employed. Because electricity has to be made as it is used, we simply can’t assume some fixed capacity utilization factor for a specific technology or power station to get an output figure, divide that into costs for the same period, and arrive at a number that can be blithely thrown about as a meaningful comparator between technologies – or one that yields much insight into that technology’s impact on the overall cost of production in a real-world electricity system.
The need to generate electricity only as it is used, and the widely varying rate of that usage, mean that regardless of technology, capacity utilization fluctuates enormously across the fleet at any one time (at any point some generators are running at maximum capacity and others not at all), and utilization for any given generator also varies across time.
Any real-world fleet will comprise some generators which usually run at high utilization and others which run at relatively low and quite variable utilization rates. The chart below shows the range of capacity utilization rates (monthly and annual) applying to the Queensland generation fleet in 2016. Simple cost of production measures like LCOE that ignore this reality by assuming a single fixed capacity utilization can therefore be highly misleading.
A related problem with LCOE measures in application to real world systems is that if we add a new power station to an existing system, this will reduce the capacity factors of some or all of the other power stations in the fleet (because total production must still equal demand) and therefore increase their unit costs of production.
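A toy example of this crowding-out effect, with invented numbers: suppose total annual demand is unchanged, and a new entrant takes a share of it away from two incumbent stations.

```python
# Toy illustration of crowding-out; every figure here is invented.
HOURS = 8760
FIXED_PER_MW_YR = 300_000   # $/MW/yr fixed costs, assumed same for all plants

def unit_cost(capacity_factor, variable_per_mwh=30):
    return FIXED_PER_MW_YR / (HOURS * capacity_factor) + variable_per_mwh

# Before: two 500 MW incumbents share 6,000 GWh/yr of demand equally.
demand_mwh = 6_000_000
cf_before = (demand_mwh / 2) / (500 * HOURS)

# After: a new entrant supplies 1,500 GWh/yr; incumbents split the rest.
entrant_mwh = 1_500_000
cf_after = ((demand_mwh - entrant_mwh) / 2) / (500 * HOURS)

print(f"incumbent capacity factor: {cf_before:.0%} -> {cf_after:.0%}")
print(f"incumbent unit cost: ${unit_cost(cf_before):.0f}/MWh"
      f" -> ${unit_cost(cf_after):.0f}/MWh")
```

The incumbents’ total costs barely change, but with less output to spread them over, their per-unit cost of production rises – an effect an LCOE figure for the new entrant alone cannot capture.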
So drawing conclusions about impacts on system average costs on the basis of LCOE measures is fraught.
And impacts on market prices will be different again, because market prices tend to reflect short-run variable costs and the overall supply-demand balance rather than total costs.
A third justified criticism of LCOE focuses on the issue of “dispatchability”. This criticism points out that because of the need to continuously balance supply and demand there is a difference in value to an electricity system of generation sources whose output can be called up and controlled (“dispatched”) anywhere between some minimum value (ideally zero) and maximum capacity, versus those whose output is largely determined by non-controllable factors like wind and sunshine. Simple LCOE measures totally ignore this complication.
There are sometimes extensions or modifications of the LCOE concept proposed to factor in “firming costs” for non-dispatchable technologies. This might involve adding in costs for a storage capability attached to a non-dispatchable source, or extra costs for a certain “matched” quantity of dispatchable generating capacity to supplement variable output from the non-dispatchable source. Finkel had a go at this:
But these approaches suffer from the same fundamental problem as LCOE – fixed and somewhat arbitrary assumptions (and more of them) about the amounts and utilization rates of the supplementing technologies, and about the overall utilization of the combination in a real-world system. These attempts may be directionally useful in at least highlighting that costs for dispatchable and non-dispatchable sources are not directly comparable, but they are necessarily imprecise (despite charts like the one above) and really do little to resolve which technology is “cheaper” – something LCOE is not much good for even when comparing one form of conventional dispatchable technology with another.
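To see how assumption-laden such a “firmed” LCOE is, here is a sketch of one being assembled for a hypothetical solar farm plus battery. Every figure – capital costs, lives, discount rate, storage sizing – is an invented assumption, and changing any of them moves the answer substantially, which is precisely the problem:

```python
# Sketch of a "firmed" LCOE; all inputs are invented assumptions.
HOURS = 8760

def annualised(capex, rate, years):
    """Annual equivalent of an up-front capital cost."""
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    return capex * crf

# Variable source: 1 MW of solar at an assumed 30% capacity factor
solar_fixed = annualised(1_200_000, 0.07, 25) + 15_000   # $/yr incl. O&M
solar_mwh = HOURS * 0.30

# "Firming" storage, sized (arbitrarily) at 0.5 MW of a $1.5m/MW battery
storage_fixed = annualised(0.5 * 1_500_000, 0.07, 15) + 10_000  # $/yr

bare_lcoe = solar_fixed / solar_mwh
firmed_lcoe = (solar_fixed + storage_fixed) / solar_mwh
print(f"bare: ${bare_lcoe:.0f}/MWh, firmed: ${firmed_lcoe:.0f}/MWh")
```

The gap between the “bare” and “firmed” figures is driven almost entirely by the storage sizing assumption, for which there is no single defensible choice outside a system-wide analysis.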
Here it’s worth reiterating that assumptions that dispatchable “baseload” generation technology like coal or combined-cycle gas plant can simply run at a preferred high capacity factor at all times and never break down – implicit in LCOE measures and the absence of any “firming costs” for these technologies – are also inappropriate. In an all-coal, all-gas, or all-anything system, relatively few generators would be able to run this way.
Why Bother with LCOE?
So what is LCOE good for? Predominantly for assessing trends in generation costs for a given technology type over time – and only for comparison between technologies where their cost structures and technical characteristics happen to be very similar, meaning that they will play similar roles in a real-world electricity system. So pointing out a falling trend over time in the LCOE of solar PV technologies is a very valid use of the measure. Comparing the LCOEs of (say) sub-critical and ultra-supercritical coal, or even the LCOEs of coal with and without carbon capture and storage, is also generally valid (although there are even traps in this once we explicitly factor in a price on carbon emissions).
LCOE can also be a useful metric if you’re a project developer or asset owner fortunate enough to find a buyer for your power station’s output who’s willing to take on the risks of variable capacity utilization and market prices, and offer you a fixed price Power Purchase Agreement (PPA) for a relatively predictable quantity of production. In this case comparing the price struck in the PPA with the LCOE of your power station is a valid indicator of profitability.
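In that PPA case the comparison is just simple arithmetic. With invented figures:

```python
# Illustrative PPA-vs-LCOE profitability check; all figures are invented.
ppa_price = 65.0          # $/MWh fixed price agreed in the PPA
project_lcoe = 58.0       # $/MWh from the project's own cost model
contracted_mwh = 400_000  # expected annual output sold under the PPA

annual_margin = (ppa_price - project_lcoe) * contracted_mwh
print(f"expected margin: ${annual_margin:,.0f}/yr")
# expected margin: $2,800,000/yr
```

Because the output quantity and price are both locked in, the capacity-factor assumption buried in the LCOE is, for once, a genuine input rather than a guess.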
But note that here it’s the buyer who takes on the risks that the value of that electricity to the system is markedly different from the PPA price or LCOE.
After all that bagging of that particular measure, what are the alternatives? This will have to wait for another article, but there certainly are better ways to evaluate the relative economic value of different generation technologies, as well as demand-side measures. And they all start with the recognition, unlike LCOE measures, that the costs of meeting electricity demand have to be evaluated on a system-wide basis, not by focussing simply on individual technologies or power stations.