Description
This paper examines the economics of the first generation of nuclear power plants in the United States between the 1950s and 1980s, as discussed by Böse (in press), with a focus on costs during this formative period. The United States, the largest global producer of nuclear power, faced significant uncertainties in the early years of nuclear energy deployment, making the topic of particular importance. Contrary to initial expectations, the light-water reactor (LWR), viewed as less promising than the fast-neutron breeder reactor, became the dominant technology in the rollout of nuclear power plants.
We update the earlier literature by Baade (1958), Münzinger (1960), and Cohn (1990) and provide a bottom-up estimation of the cost of nuclear power, analyzing plants of the 1950s (e.g., Shippingport and Yankee), the 1960s (e.g., Oyster Creek and San Onofre-1), and the 1970s (e.g., Palisades). These costs are compared to those of competing energy sources, primarily coal, to assess the economic viability of nuclear power. Our preliminary findings reveal that the levelized cost of electricity (LCOE) of early nuclear plants increased significantly over time and was thus unable to compete with coal. This trend contradicted the widespread expectation among utilities, reactor vendors, and the nuclear industry that nuclear power would eventually become more cost-effective.
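As background on the LCOE metric, the following is a minimal sketch of the standard discounted-cost calculation (lifetime costs divided by lifetime generation, both discounted to present value). All input figures are hypothetical placeholders for illustration, not values from the paper.

```python
def lcoe(capital_cost, annual_om_cost, annual_fuel_cost,
         annual_generation_mwh, discount_rate, lifetime_years):
    """Levelized cost of electricity in $/MWh:
    discounted lifetime costs / discounted lifetime generation."""
    discounted_costs = capital_cost  # overnight capital cost incurred at year 0
    discounted_output = 0.0
    for year in range(1, lifetime_years + 1):
        factor = (1 + discount_rate) ** -year
        discounted_costs += (annual_om_cost + annual_fuel_cost) * factor
        discounted_output += annual_generation_mwh * factor
    return discounted_costs / discounted_output

# Hypothetical example (placeholder inputs only): a small early reactor.
print(lcoe(capital_cost=100e6, annual_om_cost=3e6, annual_fuel_cost=2e6,
           annual_generation_mwh=500_000, discount_rate=0.07,
           lifetime_years=30))
```

Under this definition, rising capital costs feed directly into LCOE, which is one reason early nuclear plants could lose ground against coal even when fuel costs were low.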
Our analysis highlights the complexity of nuclear power’s economic performance in its early stages and emphasizes the need for a more nuanced exploration of the interplay between economics, technology, and policy in the development of nuclear energy.