Solar Innovations 2020

New solar energy innovations are being unveiled at Intersolar 2020 in San Diego this week, including the California launch of a concrete solar shingle, a unique under-the-panel battery storage configuration, and a single-axis tracker that can accommodate a 10% grade on undulating sites. And one forward-looking company is now buying up broken solar panels in expectation of mining the components for recycling.

The parade of international technology at the Intersolar North America 2020 show continues to demonstrate that the solar industry is getting technologically smarter even as it gets cheaper. This is particularly good news for residential and commercial solar installations, since utility-scale solar seems to have bottomed out with low-ball, long-term power purchase agreements.

An Integrated Concrete Solar Shingle Comes to California

One standout new offering at the trade show is the Ergosun solar shingle, which looks much like a slate roof tile but can gather both direct and low-light sun rays. The concrete base is lapped to be waterproof and is sturdy enough to withstand major snow loads, like those in Norway, where the company recently performed its first install, according to Bruce Wintemute, Solarmass Energy’s chief operations officer.

The solar shingles generate 15 watts per tile, which the company claims is a 60% gain in yield over a standard silicon-wafer solar panel occupying the same square footage. Each tile features a patented two-piece junction box and comes in a range of colours.

The shingles are manufactured both in Canada and China for the US market and carry a warranty of 80% of peak power after 25 years. The Ergosun Integrated Solar Roof Tile was engineered in the UK and now generates power on homes in Canada, the United Kingdom, Sweden, South Africa, and Jamaica.

Yotta Energy Unveils Under-The-Panel Battery Storage

Startup Yotta Energy unveiled its SolarLeaf, a modular, DC-based battery storage system that sits under a traditional solar panel and uses smart passive thermal regulation to protect the batteries from high heat. The chemistry is lithium iron phosphate, which carries an extremely low fire risk and contains none of the cobalt or manganese found in other battery chemistries, notes Sean Walters, the company’s director of business development.

The SolarLeaf includes a built-in DC optimizer with wireless monitoring to manage both solar power generation and energy storage. The DC coupling avoids the losses of systems in which DC current is converted to AC for household use and then reconverted to DC to charge batteries, a process that gives up several percent of the energy at each conversion stage. Since the battery is located under the panel, rather than being housed in a cabinet that might take up critical ground or wall space, it offers a unique solution for applications like carports, where battery cages on the ground could be bumper bait.
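To illustrate why skipping the extra DC-AC-DC conversions matters, here is a minimal sketch in Python. The per-stage efficiencies are illustrative assumptions, not Yotta's published figures:

```python
# Rough comparison of AC-coupled vs. DC-coupled storage paths.
# All efficiency values below are generic assumptions for illustration only.

def chain_efficiency(*stages):
    """Multiply per-stage efficiencies to get the overall efficiency of a path."""
    total = 1.0
    for eff in stages:
        total *= eff
    return total

# AC-coupled: PV (DC) -> inverter -> battery charger -> battery -> inverter
ac_coupled = chain_efficiency(0.96, 0.95, 0.95, 0.96)

# DC-coupled: PV (DC) -> DC optimizer/charger -> battery -> inverter
dc_coupled = chain_efficiency(0.98, 0.95, 0.96)

print(f"AC-coupled path delivers ~{ac_coupled:.0%} of the stored solar energy")
print(f"DC-coupled path delivers ~{dc_coupled:.0%}")
```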

The Sunflower Solar Tracker Rides the Hills

Another innovation at the show is the addition of a ballast-mounted version of the Sunflower single-axis tracker from RBI Solar, which can be installed on an undulating 10% grade. The tracker was launched just one year ago but garnered 500 megawatts of installations during 2019, says Kevin Ward, the company’s marketing manager.

The precast concrete ballast version of the Sunflower permits installation over landfills, culturally sensitive sites, and other locations where ground penetration is either not desirable, or not permitted. The linkage of the patented gearbox for the tracker is positioned such that weight is carried by the post, rather than on the gear, which helps prevent the wind-induced torque that is referred to as “galloping” in the industry.

One advantage of the tracker’s ability to handle steep slopes is that ground preparation costs are largely eliminated, opening up the geographic market for the technology to locations that previously would not have been considered suitable for a tracker system. The centralized tracker system accommodates up to 120 modules per row.

WeRecycleSolar Harvests Dead Solar Panels

A little-known fact in the solar industry is that the metallic and chemical components of solar panels would be considered hazardous, if the public were exposed to the compounds, points out Dwight Clark, the chief compliance officer for WeRecycleSolar, which is actively buying decommissioned solar panels at the show.

“About half of the state environmental agencies would classify solar panels as hazardous waste,” Clark says. “So we have developed a process to separate out 98% of the components by weight and to provide them to the international market for commodities.” WeRecycleSolar has just completed Phase One of its process testing and hopes to begin commercial operation within four months. The limiting factor will be having enough panels on hand — about 100 tons, or roughly 4,000 panels at 50 pounds apiece — to justify the recovery line, he says. The company has found a way to recover all the standard components of a solar panel except the plastic back sheet.

Source: CleanTechnica

Solar Panels Dessert

In 1989, pro-nuclear lobbyists claimed that wind power couldn’t even provide 1% of Germany’s electricity. A few years later, pro-nuclear lobbyists ran ads in German newspapers, claiming that renewables wouldn’t be able to meet 4% of German electricity demand.

After the renewable energy revolution took off, in 2015, the pro-nuclear power “Breakthrough Institute” published an article claiming solar would be limited to 10–20% and wind to 25–35% of a power system’s electricity.

In 2017, German (pro-nuclear power) economist Hans-Werner Sinn tweeted that more than 50% wind and solar would hardly be possible. And in 2018, Carnegie Science reported a study claiming that “wind and solar could meet most but not all U.S. electricity needs.” According to one of the authors, their research indicates that “huge amounts of storage” or natural gas would need to supplement solar and wind power.

From a pro-renewable perspective, this is encouraging. The claims about the limits of renewable energy have moved from “not even 1% of electricity” to “most but not all of the electricity.” And yet, the anti-renewables message has always been the same: renewables will lead to a dead end.

In order to underscore their point, anti-renewable energy propagandists now publish incorrect cost figures that claim a fully renewable electric grid would be unaffordable or way more expensive than other options, such as, you guessed it, nuclear power.

MIT Technology Review writes about the “scary price tag” that such a purely renewable grid would come with, calculating $2.5 trillion as a price tag for storage requirements alone — 12 hours of storage. Wood Mackenzie also talks about $2.5 trillion, albeit for 24 hours of storage. The Clean Air Task Force puts the cost for a 100% renewable grid in California at an annual $350 billion.

Anti-renewable propagandists need to talk about imaginary high costs of renewables, especially because one of their preferred ways of generating electricity — nuclear power — turns out to be incredibly expensive.

Renewable energy gets cheaper each year, nuclear power gets more expensive each year — how come they still adamantly claim that renewables are not a cost-effective way of decarbonizing?

The answer, of course, is that the studies are flawed, and a closer look reveals several recurring patterns. Among these flaws are wild overestimates of storage requirements, overestimates of grid expansion needs, and the insistence on uneconomical strategies for storing electricity, such as using batteries to hold several weeks’ worth of grid electricity consumption.

In order to understand how these studies are flawed, it’s essential to understand how a renewable energy grid actually works, how energy storage works, and what costs you can expect. After that, I will describe the flaws in some of these studies and recalculate a more realistic scenario, especially more realistic cost projections.

How a renewable grid works

A few facts are important to know:

Storage will not be necessary for a long time.

The sun doesn’t always shine and the wind doesn’t always blow — yet most of the time, either sun or wind is available. Storage will therefore not play a significant role for a long time. Solar and wind power will increase their shares of electricity consumption, and until they reach 80% of consumption, grid expansion, moderate curtailment, and gas-fired backup power plants are the only tools needed to reach such a high share of renewables.

Backup power plants are cheap.

So, if 80% of the electricity is generated using solar and wind power, the remaining 20% has to come from backup power plants. According to grid operator PJM’s data, backup power plants cost up to $120,200 per megawatt per year. We can calculate the cost for a worst-case scenario: covering the entire 769 gigawatts of US peak load with backup power plants would cost $92.5 billion per year. Divided by the 4.18 trillion kilowatt-hours consumed in the USA in 2018, that amounts to 2.2 cents per kilowatt-hour.
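A quick back-of-the-envelope check of that arithmetic, sketched in Python using the figures cited above:

```python
# Worst-case cost of keeping enough backup capacity for the entire US peak load.
peak_load_mw = 769_000            # 769 GW of US peak load, in MW
cost_per_mw_year = 120_200        # PJM-style capacity cost, $/MW-year (upper bound)
annual_consumption_kwh = 4.18e12  # US electricity consumption in 2018, kWh

annual_cost = peak_load_mw * cost_per_mw_year
cents_per_kwh = annual_cost / annual_consumption_kwh * 100

print(f"Annual backup capacity cost: ${annual_cost / 1e9:.1f} billion")   # ~ $92.4 billion
print(f"Spread over all consumption: {cents_per_kwh:.1f} cents/kWh")      # ~ 2.2 cents
```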

Nuclear power is expensive and gets more expensive over time.

The newest Lazard figures put nuclear power at 15 cents per kilowatt-hour, which is higher than the figures of previous years.

Even for 80 percent solar and wind, grid investment costs are moderate.

The NREL estimates that, even if you get 77% of electricity from solar and wind power, the grid will have to be expanded from around 85,000 gigawatt-miles to around 116,000 gigawatt-miles. That’s an increase of only about 36%.

Getting more solar and wind power will require overbuilding and curtailment.

One study that is often cited as “proof” of the limits to renewables actually finds that, even without any storage, overbuilding solar and wind to 1.5 times US consumption could get solar and wind to 93% of the electricity in the grid. To put this into perspective: if you overbuild solar and wind power 1.5 times at an LCOE of 3 cents per kWh (according to BNEF, this is possible for solar and wind by 2030), that gives you an effective LCOE of 4.5 cents per kWh (ignoring minor system costs for curtailment), which is still very cheap and far below the 15 cents per kWh figure for nuclear power.
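The same overbuild arithmetic, as a small Python sketch (it simply assumes curtailed energy is discarded and carries no value):

```python
# Effective cost of consumed electricity when wind/solar are overbuilt and curtailed.
lcoe_generation = 0.03    # $/kWh for wind/solar, BNEF's ~2030 figure cited above
overbuild_factor = 1.5    # build 1.5x annual demand; the excess is curtailed

# Every kWh that is actually consumed has to carry the cost of the curtailed share.
effective_lcoe = lcoe_generation * overbuild_factor
print(f"Effective LCOE of used electricity: {effective_lcoe * 100:.1f} cents/kWh")  # 4.5 cents
```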

The remaining 7% could be provided, for example, by burning synthetic methane that’s made from hydrogen and carbon dioxide.

You can make a synthetic gas that’s 100% compatible with the existing gas infrastructure. The process is known as power-to-gas. Electrolysis uses solar and wind electricity to split water into hydrogen and oxygen. In a second step, carbon dioxide, which can be captured from the air (direct air capture), is combined with the hydrogen in a methanation reaction. The result is methane, which is 100% compatible with the existing gas grid and gas-fired power plants. Once this methane is burned, it emits only as much carbon dioxide as was previously captured from the air. The cost of this methane is currently estimated at 20 euro-cents per kWh, but costs have come down in the past and will continue to come down. In Germany, there is already a facility that generates renewable methane and injects it into the gas grid.

There might be other storage options as well in the future.

To store enough electricity for the entire grid for many hours or even days, batteries are too expensive. Yet there are other options under investigation. Siemens is testing a simple concept of first converting electricity into heat, storing the heat, and later using it to drive a steam turbine. Highview Power stores electricity as liquefied air and uses the expanding, reheating air to drive a turbine. Both companies have already built pilot storage plants.

Considering these facts, it is possible to make a calculation about how much a purely renewable grid would likely cost, using today’s technology and today’s prices. Whenever anyone claims way higher costs, we should grow suspicious immediately.

Calculating the cost for a purely renewable grid

Assuming we used today’s technology, we can compare solar and wind power to nuclear power. According to Lazard, nuclear power costs 15 cents per kWh. Generating all US electricity from nuclear power, therefore, would cost $615 billion per year. So, how much would a completely renewable grid cost — per year and per kilowatt-hour?

One way a renewable grid would work would include the following elements:

Expanding solar and wind power to reach 93 percent wind/solar

Using the study “Geophysical constraints on the reliability of wind and solar power,” getting to 93 percent solar and wind power would require generating 1.5 times US power demand. This means that you overbuild wind and solar and curtail some of the electricity to increase the amount of solar/wind power that can be used directly. You would have to generate 6,300 TWh of renewable electricity, which at current costs (according to Lazard) would come to $271 billion per year.

Paying for backup power plants.

Backup power plants that could provide the entire grid with electricity would cost $92.5 billion per year, according to PJM data.

Expanding the grid

NREL data suggests that you need an additional 30 TW-miles of transmission to get to 80 percent renewables. Extrapolating, you would need around 37.5 additional TW-miles for 100 percent. That’s around 60 TW-km — thus around $60 billion of grid investment. Annualized (10% WACC, 10-year payment), that comes to around $10 billion a year, which shows that grid expenditure is nearly negligible.
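Annualizing that investment is a standard capital-recovery (annuity) calculation; a minimal sketch, using the 10% WACC and 10-year term assumed above:

```python
# Convert an up-front grid investment into an equivalent annual payment.
def capital_recovery_factor(rate, years):
    """Annual payment per dollar of investment (standard annuity formula)."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

grid_investment = 60e9   # ~$60 billion, i.e. ~60 TW-km at $1 million per GW-km
annual_payment = grid_investment * capital_recovery_factor(rate=0.10, years=10)

print(f"Annualized grid cost: ${annual_payment / 1e9:.1f} billion per year")  # ~ $9.8 billion
```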

Burning renewable methane in these backup power plants to reach 100 percent renewable electricity

According to the latest study by Ludwig Bölkow Systemtechnik, synthetic natural gas generated from hydrogen, with direct air capture supplying the carbon dioxide, costs around 20 euro-cents per kWh when produced in Europe. Burned in a 60 percent efficient CCGT power plant, 1 kWh of electricity would therefore cost 33.33 euro-cents (37.15 US cents). Generating 7 percent of US electricity from renewable synthetic methane would thus cost around $110 billion per year.

Total cost, therefore, would amount to $483.5 billion per year. Divided by electricity consumption of 4100 TWh, the total cost would be 11.8 cents per kilowatt hour. This is already cheaper than Lazard’s estimate for nuclear power, which is currently at 15 cents per kilowatt-hour.

Let’s also stress that this will change. In 2030, according to BNEF, wind and solar power will already be below $30 per MWh, and synthetic methane will cost around 15 euro-cents per kWh, according to LBST (burned at 60 percent efficiency, that works out to about 27.86 US cents per kWh of electricity). You would then spend $189 billion per year on wind/solar electricity, plus roughly $82 billion on synthetic methane (294 TWh at 27.86 cents), $92.5 billion on CCGT power plants, and $10 billion on grid expansion, for a total of about $373 billion per year, or 9.1 cents per kWh. That’s way cheaper than nuclear power.
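Both scenario totals can be reproduced from the stated inputs; here is a small Python sketch that simply re-does the arithmetic above (a sanity check of the cited figures, not an independent cost model):

```python
# Re-compute the annual cost and cents/kWh of the two scenarios described above.
def scenario(gen_cost_bn, methane_cost_bn, backup_bn=92.5, grid_bn=10.0,
             consumption_twh=4100):
    total_bn = gen_cost_bn + methane_cost_bn + backup_bn + grid_bn
    cents_per_kwh = total_bn * 1e9 / (consumption_twh * 1e9) * 100
    return total_bn, cents_per_kwh

# Today's prices: $271 bn for 6,300 TWh of wind/solar, methane-fired power at 37.15 c/kWh
today = scenario(gen_cost_bn=271, methane_cost_bn=110)

# ~2030 prices: 6,300 TWh at $30/MWh, 294 TWh of methane-fired power at 27.86 c/kWh
future = scenario(gen_cost_bn=6300e6 * 30 / 1e9,
                  methane_cost_bn=0.2786 * 294e9 / 1e9)

print(f"Today: ${today[0]:.1f} bn/yr -> {today[1]:.1f} cents/kWh")    # ~ $483.5 bn, ~11.8 c
print(f"2030:  ${future[0]:.1f} bn/yr -> {future[1]:.1f} cents/kWh")  # ~ $373 bn, ~9.1 c
```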

So, how come we keep reading that a fully renewable electricity grid would be astronomically expensive, especially from pro-nuclear lobbyists? If a quick and dirty calculation already shows that renewable electricity is already cheaper than nuclear power, how come numerous studies point to 100 percent renewable electricity being unaffordable?

Once we understand how a renewable grid works and how much it will likely cost, we can look at the strategies used to discredit renewable energy.

Let’s look at the studies.

One of the studies frequently quoted by MIT Technology Review is “Geophysical constraints on the reliability of solar and wind power in the United States.” It’s available on the Internet for free, and it is a seemingly serious attempt to calculate scenarios for reaching 100 percent renewable electricity. Using 36 years of weather data and comparing it to US electricity demand, the study finds that 80 percent of US electricity could be provided by wind and solar power if either:

  • 12 hours of storage were installed, or
  • a continental-scale transmission grid were built.

To achieve 100 percent solar/wind power, “several weeks worth of electricity storage” and/or “the installation of much more capacity of solar and wind power than is routinely necessary to meet peak demand” would be required. The availability of “relatively low cost, dispatchable, low CO2 emission power” would obviate the need for extra solar/wind and/or energy storage.

So far, that’s nothing new.

The study then goes on to calculate the cost of various scenarios for getting to 100 percent renewable energy. However, none of the scenarios considered is even remotely as economical or realistic as a solar/wind/backup power plants/power-to-gas scenario. Instead, the study only considers three options:

  • overbuilding (no storage)
  • pumped hydro storage
  • battery storage.

The study gives no precise figures for the annual costs of these options, but it mentions that the investment would be $2.7 trillion, the assumed battery life 10 years, and the assumed discount rate 10 percent — which implies annual costs of around $440 billion.

No reason is given why power-to-gas would be completely ignored, at a time when it was already considered a required future technology for reaching 100 percent renewables in Germany, and when precise cost data had already been published there (Potenzialatlas Power to Gas). Compared to today, power-to-gas was significantly more expensive at the time the study was published (and so were the backup power plants to burn that gas), yet the total cost of storing electricity that way would still have been significantly cheaper than the battery scenarios the study did consider.

To get 93 percent solar/wind without storage, generating 1.5 times demand (6,000 TWh) would have been necessary — at that time around $270 billion. Power-to-gas (synthetic methane) to cover the remaining 7% would have cost $185 billion, and gas-fired power plants would have cost $150 billion. Total cost would have been around $600 billion, roughly on par with what nuclear power costs today.

To use batteries, $430 billion per year would have been necessary for storage alone; in addition, you would have had to generate 8,000 TWh of electricity, leading to a total cost of around $790 billion. This is equivalent to almost 20 cents per kilowatt-hour.
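A quick comparison of these scenario totals, and of the figures used in the next paragraph, as a Python sketch (it only restates the numbers cited above):

```python
# Compare the study's two routes with the recalculation made earlier in this article.
no_storage_total_bn = 600     # overbuilt wind/solar + power-to-gas + gas plants, $bn/yr
battery_total_bn = 790        # battery-storage route, $bn/yr
todays_estimate_bn = 483.5    # today's technology and prices, $bn/yr (from above)
us_consumption_kwh = 4.1e12

print(f"Extra cost of the battery route: ~${battery_total_bn - no_storage_total_bn} bn/yr")     # ~$190 bn
print(f"Gap versus today's estimate:     ~${battery_total_bn - todays_estimate_bn:.0f} bn/yr")  # ~$300 bn
print(f"Battery route per kWh: {battery_total_bn * 1e9 / us_consumption_kwh * 100:.0f} cents")  # ~19 cents
```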

That study therefore calculates a scenario which generates around $190 billion a year in unnecessary costs, and even that scenario is now outdated: as already calculated, today’s technology would lead to an annual cost of $483.5 billion, making the Caldeira scenario roughly $300 billion per year too expensive. The study is outdated, assumes the use of inadequate technology, and therefore shouldn’t carry any weight any more.

The Clean Air Task Force Study for California

If you thought a study like the Caldeira study was highly misleading, you haven’t seen the CATF study for California. As expected, this study was reported on by MIT Technology Review as well.

The study assumes that for 100 percent renewable electricity, California alone would have to pay an annual $350 billion for storage alone. That is equivalent to roughly $1.60 per kilowatt-hour. As expected, the study is complete nonsense, but how on earth are such insane figures even calculated and argued for?

The most likely explanation is that this study completely ignores the possibility of overbuilding and curtailment. This is especially problematic in California, because both wind and solar power plants produce less electricity in winter. The most obvious approach to address that problem would be to build enough wind and solar power plants to provide enough electricity in winter. In summer, excess electricity generation would have to be curtailed.

Instead of this obvious approach, it appears that the Clean Air Task Force assumes that California will build giant batteries that can store all excess electricity in summer to save it for winter. Such an approach is completely absurd, as is demonstrated by the price tag of $350 billion for California alone.

Using up-to-date figures, we can estimate the actual cost for California. To reach 100 percent renewable energy using solar, wind, and power-to-gas, we can estimate a total cost of $42.4 billion a year. That is equivalent to an LCOE of 18.4 cents using current technology. This is still rather expensive, but not much more expensive than nuclear power. Considering the rapid cost declines for solar and wind power, it can be assumed that solar, wind, and power-to-gas will turn out to be the more economical solution for California as well.

The Hans-Werner-Sinn study for Germany

A similar study was already published in Germany, again assuming one scenario in which curtailment was not allowed. So, again, huge amounts of electricity had to be stored in summer to save it for winter — 16 TWh of storage altogether to reach 89 percent solar and wind power. The second scenario didn’t allow for storage at all, which made a massive overcapacity necessary: 61 percent of wind and solar power would have to be curtailed to reach 89 percent solar and wind power. There is already a rebuttal to that study, published by Zerrahn, Schill, and Kemfert, which showed that a compromise (allowing for 22 percent curtailment) would reduce storage needs to 1 TWh, while allowing for 32 percent curtailment would further reduce storage needs to 432 GWh.

The Wood Mackenzie Study

Wood Mackenzie published a white paper, “Deep Decarbonization Requires Deep Pockets,” estimating capital investment costs of $4.5 trillion for decarbonization using wind, solar, and batteries alone.

Wood Mackenzie’s assumptions are the following:

  • 1,600 gigawatts of generation (wind and solar)
  • 24 hours of lithium-ion battery storage
  • 200,000 miles of new high-voltage transmission at an overall cost of $700 billion.

Wood Mackenzie’s assumptions partly contradict the “geophysical constraints” study. It suggests increasing solar and wind power roughly 12.3-fold, which means that there would be no overbuilding at all.

There is little indication that this would suffice to get to 100 percent solar and wind, even with 24 hours of battery storage (rather than the 12 hours suggested by the Caldeira study). In fact, the supplementary data provided by Caldeira shows that increasing storage capacity from 12 hours to 24 hours would have little effect on the need to overbuild solar and wind power plants. Since battery storage is incredibly expensive, Wood Mackenzie in effect suggests using:

  • an inadequate storage strategy
  • unnecessarily much storage
  • likely too little solar and wind power to actually achieve 100 percent solar and wind.

Even less justifiable is the assumption that $700 billion would have to be invested in grid expansion. Based on NREL data, it’s likely that less than one tenth of that sum needs to be invested. Even the Caldeira study “only” talks about $410 billion of grid investment.

The Jenkins–Thernstrom commentary

Jenkins, former Director for Energy and Climate Policy at the Breakthrough Institute, co-authored one study and one commentary on the future of electric grid decarbonization in the journal Joule, both of which, of course, found that a purely wind-solar-storage solution is not a good idea. The study was published in November 2018, the commentary in December 2018.

The commentary points out challenges on the path to a zero-emissions grid. It correctly finds that the challenges increase as renewable penetration increases. It also correctly finds that grid expansion costs are negligible compared to other costs and that greening the electricity sector is vital to greening the economy.

It correctly finds that there is a necessity to overbuild. However, it claims that between 40 and 50 percent of generated electricity would have to be curtailed and that this would almost double the costs of the entire electricity system. That conclusion is, of course, completely outdated, since the costs of electricity from solar and wind power have fallen drastically.

The study specifically mentions a possible increase in electricity consumption, for electricity “and fuels produced from electricity, e.g. hydrogen,” to more than 50 percent of final energy demand.

However, oddly, the study completely ignores the possibility of using exactly these fuels to green the electric grid. Producing electrolytic hydrogen and converting it to methane is dismissed with the argument that “considerable uncertainty remains about the real-world cost, timing, and scalability of these storage options.” This technology (power-to-gas), which significantly reduces the cost of greening the electric grid, is thus written off entirely.

There is no clear definition of “considerable uncertainty,” and Jenkins, Luke, and Thernstrom don’t mention any specifics or any studies that point to it. In fact, in 2018, various German studies (such as the DENA e-fuels study) were already very specific about the cost (and also predicted a significant cost reduction). No reason is given why that data would be completely ignored.

The commentary goes on to argue that for a wind-solar-storage grid, several technologies (grid expansion, flexible demand, seasonal storage, and very-low-cost wind and solar) must all become reality, whereas any one of several other technologies, such as nuclear power, CCS, or enhanced geothermal energy, could fill the firm role in a low-cost, low-carbon portfolio. Therefore, the commentary argues, the chances of wind, solar, and storage providing 100 percent of electricity consumption are lower than the chances of wind and solar plus nuclear, CCS, or geothermal energy.

This logic has a severe flaw. First of all, very-low-cost wind and solar are very likely to become reality and partly already are reality. Just because several conditions have to be met in one scenario doesn’t mean that this scenario is less likely to work out. Jenkins writes about nuclear power, CCS, bioenergy, and enhanced geothermal energy: “Assume that each resource has only a 50 percent probability of becoming affordable and scalable within the next two decades. If all four options are pursued, however, the odds that at least one succeeds would be 94 percent.”

But you cannot do that. You cannot simply assume a certain chance. Jenkins says that these examples are “purely illustrative,” but still goes on arguing that we shouldn’t eschew the development of firm low-carbon technologies because they face challenges today.

But that’s not how it works.

To make wind and solar power cheap, to make batteries cheap, hundreds of billions of dollars had to be invested. We don’t have an infinite amount of money and an infinite amount of time. Should we invest hundreds of billions of dollars in nuclear power, CCS, and geothermal each? This is money that we couldn’t use for making wind and solar power and energy storage — all of which are proven and highly developed technologies — even cheaper. The more time and money we waste on technologies that face severe problems and are expensive, the less time and money we can use for solar, wind, and energy storage — technologies that actually work.

The Jenkins–Sepulveda–Sisternes–Lester study

Again, this study points out the hardly new “finding” that a grid consisting merely of batteries, solar, and wind power is likely going to cost more than other alternatives. This is well known. This is exactly why there is investment in power-to-gas and other long-term storage technologies — for example, thermal energy storage.

Of course, power-to-gas is again ignored entirely, leaving lithium-ion batteries as the only storage option for the wind-solar-storage scenario.

What’s more worrisome about this study is the fact that the authors “propose a new taxonomy that divides low-carbon electricity technologies into three different sub-categories: ‘Fuel-saving’ variable renewables (such as solar and wind), ‘Fast burst’ balancing renewables (such as lithium-ion batteries), and ‘firm’ low carbon resources such as nuclear power plants and carbon capture and storage (CCS) power plants.”

This is a very dangerous taxonomy. If we start using it, we implicitly rule out that solar, wind, and some sort of energy storage can power the grid alone. Solar and wind power will always merely be considered an add-on to a grid that is essentially powered by some other resource.

Of course, power-to-gas could be considered a “firm” energy source. However, there is a significant difference between power-to-gas backup on the one hand and carbon capture and storage (CCS) or nuclear power on the other: capital costs. Equipping a gas-fired power plant with carbon capture would double its capital costs, which hurts its economic prospects if it isn’t used frequently. Nuclear power is even more capital-intensive and would have to run frequently as well.

This is also confirmed by what the authors envision: the officially named “mid-range scenario” (presumably the most likely outcome, according to the authors) indicates that nuclear power will be the most important electricity source, providing around 50 percent of all electricity in the “Southern System” and around 80 percent in the “Northern System.” Jenkins basically did it again: limit wind and solar power to a maximum of around 50 percent and declare that the most important electricity source of the future will be — you guessed it — nuclear power.

However, looking at the study, you will immediately find significant flaws.

The first obvious flaw, of course, is that power-to-gas is completely ignored. This was expected.

A little less expected are the assumptions for technology costs.

For example, the mid-range costs for solar power are considered to be $900 per kilowatt. This is based on the NREL data for 2017, applying 50 percent cost reduction. In the “Very Low” scenario, solar is assumed to cost $670 per kilowatt — based on the NREL’s estimates for 2047 (Utility PV — Low).

As for wind, mid-range costs are assumed to be 25 percent below the NREL’s “low” assumption for 2017 wind power. “Very low” wind power costs are assumed to be $927 per kilowatt, based on NREL’s estimates for 2047 wind power (Land-Based Wind, TRG 1 — Low).

At the same time, the “Conservative” assumption for nuclear power is $7,000 per kilowatt, based on Georgia Public Service Commission (PSC) data.

$670 per kW for solar in 2047 is likely way too pessimistic. DNV GL, for example, now estimates that solar PV will be at 42–58 US cents per watt in 2050. The most optimistic “very low” scenario for solar should therefore be around $420/kW, not $670/kW. Wind energy forecasts are more conservative, so the wind projections made by Jenkins might be correct.

But looking at Jenkins’ envisioned grid supply, in most high-renewable scenarios the largest part of the renewable electricity is provided by solar power anyway. Thus, underestimating the decline in solar energy costs means decisively overestimating the total cost of a renewable energy grid.

As for nuclear power, Jenkins’ most pessimistic assumption is that nuclear power costs $7,000 per kilowatt. That is actually overly optimistic. Lazard currently estimates that nuclear power costs between $6,500 and $12,250 per kilowatt. In 2016, estimates were at $5,400–8,200 for nuclear ($8,650 for new US nuclear). This means that nuclear power actually got more expensive. Jenkins doesn’t merely assume that nuclear will reverse this trend someday; even his most pessimistic scenario assumes capital costs that would be considered the low end of the spectrum today.

To sum it up, Jenkins makes overly optimistic cost assumptions even for his “conservative” scenario regarding nuclear. And he makes overly pessimistic assumptions even for his “low” scenario regarding solar power. So he basically compares an optimistic projection of nuclear power costs to a pessimistic projection of solar power costs and finds that nuclear power is cheaper.

Now that we have looked into some anti-renewable energy propaganda studies, we can spot a set of strategies that is used by anti-renewable propagandists to discredit renewable energy.

Ignore power-to-gas

Even pro-nuclear propagandists are very well aware of the ability to store large amounts of electricity using power-to-gas — they simply ignore it. You find Jenkins, Thernstrom, and Sepulveda mentioning the technology, but they then simply go on to calculate the costs of other, less suitable storage technologies. Sepulveda gives no reason at all for ignoring power-to-gas; Jenkins and Thernstrom dismiss scenarios that rely on it, arguing that it “remains unproven at such large scales,” without explaining why power-to-gas, even though it is proven to work, would all of a sudden stop working if a large number of power-to-gas facilities were built.

Insisting on an inadequate strategy for storing large amounts of energy, such as relying on lithium-ion batteries for that task, is one way to artificially inflate the costs of going renewable.

Overestimate storage needs

The “geophysical constraints” study talks about 12 hours of lithium-ion or pumped-hydro storage for the USA. Wood Mackenzie suddenly estimates 24 hours of lithium-ion storage for the USA, Hans-Werner Sinn estimates 16 TWh of pumped-hydro storage (more than 10 days’ worth) for Germany, and the Clean Air Task Force estimates 36.3 TWh of lithium-ion battery storage for California, around 46 days’ worth of energy storage. While even 12 hours of lithium-ion battery storage is hard to justify (power-to-gas exists as an alternative), it is quite obvious that arguing that pumped hydro or lithium-ion batteries must store more than a week’s worth of electricity consumption is nonsense, designed to artificially inflate the cost estimates of a 100 percent renewable grid. This works by using the next strategy:

Ignore curtailment

The Clean Air Task Force and Hans-Werner Sinn used the strategy of simply not allowing any curtailment of renewable energy at all. This, of course, inflates the cost of storage enormously. If you do allow curtailment, you can build more wind and solar power plants than usually needed — so you have enough solar and wind power even in times of less wind or sunshine, therefore reducing storage needs. For example, to get to 90 percent solar/wind power in Germany without curtailment, you would need more than 16 TWh of storage. If you accept around 22 percent curtailment, storage needs are reduced from more than 16 TWh to 1.1 TWh.
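To make the storage volumes cited in the last two sections more tangible, here is a small sketch converting TWh of storage into days of average consumption. The annual consumption figures are rough assumptions for illustration (California around 285 TWh per year, Germany around 550 TWh per year):

```python
# Express a storage volume as days of average electricity consumption.
def days_of_storage(storage_twh, annual_consumption_twh):
    return storage_twh / (annual_consumption_twh / 365)

print(f"CATF, California (36.3 TWh): {days_of_storage(36.3, 285):.0f} days")  # ~46 days
print(f"Sinn, Germany (16 TWh):      {days_of_storage(16.0, 550):.0f} days")  # ~11 days
print(f"With curtailment (1.1 TWh):  {days_of_storage(1.1, 550):.1f} days")   # <1 day
```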

Overestimate grid expansion needs

Another way of artificially inflating cost estimates for renewable energy is to vastly overestimate the need for grid expansion. The NREL’s estimate is a grid expansion from 85,000 gigawatt-miles to around 116,000 gigawatt-miles for 77 percent solar and wind power. So even if we assume that for 100 percent solar and wind power a further expansion to 125,000 gigawatt-miles might be necessary, the costs remain moderate. One mile is roughly 1.61 kilometers, so at $1 million per gigawatt-kilometer it would cost around $65 billion to expand the grid to 125,000 gigawatt-miles. This puts into perspective the vastly overblown grid expansion estimates by Sepulveda (252,000 gigawatt-miles or 408,000 gigawatt-kilometers at a cost of around $410 billion) and Wood Mackenzie (200,000 miles of new high-voltage transmission at a cost of around $700 billion).
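The expansion cost follows directly from the assumed $1 million per gigawatt-kilometer; a quick sketch of the conversion:

```python
# Cost of expanding the US grid from 85,000 to 125,000 gigawatt-miles.
existing_gw_miles = 85_000
target_gw_miles = 125_000      # rough extrapolation for ~100% wind/solar
miles_to_km = 1.609
cost_per_gw_km = 1e6           # $1 million per GW-km, as assumed above

added_gw_km = (target_gw_miles - existing_gw_miles) * miles_to_km
cost = added_gw_km * cost_per_gw_km

print(f"Added transmission: {added_gw_km:,.0f} GW-km")
print(f"Investment: ${cost / 1e9:.0f} billion")   # ~ $64 billion
```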

Ignore or underestimate progress

A review of “recent literature” by Jenkins and Thernstrom in 2017 found that getting to near-zero emissions with wind, solar, and storage alone would cost significantly more than pathways that include technologies like nuclear power and CCS. One of the studies cited by Jenkins and Thernstrom is a study by Brick and Thernstrom from 2015. This study claims to “test the outer bounds of” future scenarios, assuming rapid and significant cost declines for wind and solar: capital costs of $1,000 per kilowatt, and increased costs for nuclear ($6,500 per kilowatt).

In November 2018, however, Lazard considered $6,500 per kilowatt the lowest end of the price spectrum for nuclear power, whereas the highest end of the spectrum was $12,250 per kilowatt. At the same time, wind and solar were estimated to cost between $950 and $1,250 per kW (solar) and between $1,150 and $1,550 per kW (wind). Thus, what was considered “rapid and significant cost declines” in 2015 was already within reach in 2018.

Source: Georg Nitsche /CleanTechnica

Are Li-Ion or Lead-Acid Batteries better for Home Energy Storage?

Until recently, the majority of solar-powered homes with energy storage used lead-acid batteries, particularly those that were entirely off-grid. In the past few years that has started to change, as an increasing number of companies offer home energy storage systems using lithium-ion batteries. But which is really better?

Here’s a rundown on the pros and cons of both.

Lead-Acid Batteries

This is the granddaddy of energy storage. Lead-acid batteries have been used to provide backup power for solar homes since at least the 1970s. While they’re similar to conventional car batteries, the batteries used for home energy storage are called deep-cycle batteries, since they can be discharged and recharged to greater levels than the batteries in most cars and trucks.

Lead-acid batteries have traditionally cost less than lithium-ion batteries, which made them more attractive for homeowners. However, they have shorter lifespans than lithium-ion batteries.

Moderate climate cycle life comparison
Image source: 2012: White Paper: Comparison of Lead Acid vs. Lithium-Ion for Stationary Energy Storage. Courtesy All Cell Technologies

The cycle life of a lead-acid battery is also lower than that of lithium-ion batteries. While some lead-acid batteries last as long as 1,000 cycles, others will only last for about 200 cycles of full charge and discharge. Lithium-ion batteries, on the other hand, typically last between 1,000 and 4,000 cycles, and some lithium iron phosphate (LFP) batteries can last for 10,000+ cycles.

As such, most lead-acid batteries last about five years and have a warranty that reflects that. So, over the lifetime of a solar array, a homeowner will have to replace lead-acid batteries numerous times.

The energy storage efficiency of lead-acid batteries is lower than other energy storage technologies like lithium-ion. Since they’re less efficient, they also can’t charge or discharge as fast as other energy storage systems.

Lead-acid batteries also have limited depth of discharge, meaning that draining too much of their energy will cause their ability to store energy to deteriorate quickly. A National Renewable Energy Laboratory (NREL) study found that discharging only 50 percent of the energy in a lead-acid battery would allow it to complete 1,800 cycles before its energy storage capacity fell significantly. If it was discharged to 80 percent of its capacity, it could only withstand 600 cycles before seeing significant reductions in storage capacity.
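One way to read those numbers is as lifetime energy throughput: usable energy per cycle (capacity times depth of discharge) multiplied by the cycle count. A small sketch, using the 1.7 kWh lead-acid unit mentioned later in this article as the assumed capacity:

```python
# Lifetime throughput implied by the NREL depth-of-discharge figures above.
capacity_kwh = 1.7   # assumed capacity (the lead-acid unit NREL tested, see below)

for depth_of_discharge, cycles in [(0.50, 1800), (0.80, 600)]:
    throughput_kwh = capacity_kwh * depth_of_discharge * cycles
    print(f"{depth_of_discharge:.0%} DoD x {cycles} cycles "
          f"-> ~{throughput_kwh:,.0f} kWh over the battery's life")
```

By this measure, shallower cycling roughly doubles the total energy the battery can deliver before wearing out.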

Since they’re less efficient at storing energy and can’t discharge fully, lead batteries require more cells and space than lithium-ion batteries do. They’re also much heavier and a bank of lead batteries will require racking and more space than a lithium-ion battery pack will.

Lead also is a toxic, heavy metal and while it’s recyclable, it can still be disposed of improperly. However, it is usually recycled into new batteries.

Lithium-Ion Batteries

Lithium-ion batteries are quickly becoming the battery of choice for many applications, from cordless power tools to laptops and vehicles. They’re prepackaged into solar batteries for homes, too. However, they still have some limitations, the first and foremost being cost.

The up-front cost of lithium-ion batteries is higher than for lead-acid batteries. In May 2018, Tesla listed the cost of its Powerwall in the US at $5,900, or $6,600 including supporting hardware. That’s for a 14 kilowatt-hour (kWh) battery that can deliver up to 7 kilowatts of power at peak demand. That doesn’t include the cost of installation, which often runs between $600 and $2,000.

Unsubsidized levelized cost of storage comparison
Image source: Lazard’s Unsubsidized Levelized Cost of Storage Comparison

But the cost of lithium-ion is changing rapidly. For the last few years, Lazard has evaluated the costs of energy storage technologies in its Levelized Cost of Storage Analysis reports. In its latest report in Nov. 2017, it found that the installed capital cost of a residential lead-acid battery ranges from $598 to $635 per kilowatt-hour. A lithium-ion battery has installed capital costs between $831 and $1,089 per kilowatt-hour.

Using those figures, a 14 kWh lead-acid battery would cost as little as $8,372 and a comparable lithium-ion battery as little as $11,634. But the lower costs of the lead-acid batteries hide a lot of other costs, like shorter lifespans and operating costs.
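Those system prices follow directly from Lazard's installed-cost ranges; a quick sketch of the multiplication, including the top of each range:

```python
# 14 kWh system cost from Lazard's installed capital cost ranges ($/kWh).
size_kwh = 14
ranges = {
    "Lead-acid":   (598, 635),
    "Lithium-ion": (831, 1089),
}

for name, (low, high) in ranges.items():
    print(f"{name}: ${size_kwh * low:,} to ${size_kwh * high:,}")
```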

The cost of the battery systems over time, however, will be significantly different. Lazard shows that the levelized cost per megawatt-hour delivered by an energy storage system is generally lower for lithium-ion batteries than for lead-acid batteries. A lead-acid battery system produces a megawatt-hour at between $1,160 and $1,239. A lithium-ion battery system, on the other hand, produces a megawatt-hour at between $1,024 and $1,274.

BNEF lithium-ion battery price survey, 2010–16
Image source: BNEF’s Lithium-ion battery price survey

The costs of lithium-ion batteries continue to fall as well. In another report, Bloomberg New Energy Finance found that lithium-ion batteries, which were selling for as much as $1,000 per kilowatt-hour in 2010, saw price drops of more than 20 percent per year over subsequent years. The average selling price of lithium-ion batteries had fallen to $209 per kilowatt-hour by the end of 2016.
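Those two data points imply a steep average annual decline; a quick sketch of the compound rate (assuming the drop happened over the six years from 2010 to the end of 2016):

```python
# Average annual price decline implied by the BNEF figures above.
price_2010 = 1000   # $/kWh
price_2016 = 209    # $/kWh
years = 6           # 2010 -> end of 2016 (assumption)

annual_decline = 1 - (price_2016 / price_2010) ** (1 / years)
print(f"Implied average decline: {annual_decline:.0%} per year")   # ~23% per year
```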

However, Bloomberg’s Mark Chediak noted those prices are chiefly for electric vehicle manufacturers. “Developers of stationary storage systems — like the kind that back up rooftop solar panels — can expect to pay 51 percent more than automakers because of much lower order volumes,” he stated.

For all these reasons, it’s important to talk with local solar and energy storage installers to learn what the current costs of lithium-ion and lead-acid batteries are for a home energy storage system. Batteries can either be used as a stand-alone system or in conjunction with a rooftop or ground-mounted solar installation to help offset some or all of a home or business’s energy needs.

In terms of lifespan, lithium-ion batteries are expected to operate for roughly 10 years or more, and they’re capable of being charged and discharged to much deeper levels without degrading significantly. NREL’s research assumed that a Tesla Powerwall could operate for 15 years without significantly losing its ability to store and discharge energy. That’s 5,475 cycles, or one cycle per day for 15 years.

Lithium-ion batteries can also charge much faster at higher voltages. While it can take lead-acid batteries up to 16 hours to fully charge, even the slowest charging lithium-ion batteries can fully charge within about four hours.

Then there’s the weight issue. Lithium-ion batteries for home energy storage systems aren’t exactly light but compared to lead, they’re like a feather. A 13.5 kWh Tesla Powerwall weighs 278 pounds. The 1.7 kWh lead-acid battery NREL tested weighed 132 pounds. It would take eight of those batteries to offer the same storage capacity as the Powerwall and together they’d weigh over a thousand pounds!
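The weight gap can also be expressed as energy per pound, using the figures quoted above:

```python
# Gravimetric comparison implied by the weights quoted above.
powerwall_kwh, powerwall_lb = 13.5, 278
lead_acid_kwh, lead_acid_lb = 1.7, 132

units_needed = round(powerwall_kwh / lead_acid_kwh)     # ~8 lead-acid units
bank_weight_lb = units_needed * lead_acid_lb            # ~1,056 lb

print(f"Lead-acid bank: {units_needed} units, ~{bank_weight_lb:,} lb total")
print(f"Specific energy: Powerwall ~{powerwall_kwh * 1000 / powerwall_lb:.0f} Wh/lb, "
      f"lead-acid ~{lead_acid_kwh * 1000 / lead_acid_lb:.0f} Wh/lb")
```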

Conclusion

All in all, lithium-ion batteries offer, apart from the higher price tag, many advantages over lead-acid batteries, and they keep getting better.

The best way to learn how much a solar battery actually will cost you is to talk with neighbours and local installers. A qualified installer will inform you about battery capacity, power, cycle life, depth of discharge (DoD), round-trip efficiency and warranty.

Source: Chris Meehan / A+ Solar Solutions

Tandem cells approaching 30% efficiency

Scientists at the Helmholtz Zentrum Berlin have taken back the world efficiency record for a perovskite/silicon tandem solar cell, achieving 29.15% with a device measuring 1cm². The record has been confirmed by Fraunhofer ISE, and according to HZB, this means that the 30% efficiency mark is within reach.

Perovskite/silicon tandem cell
Image: Eike Köhnen/HZB

A group of scientists at the Helmholtz Zentrum Berlin (HZB) has produced a perovskite/silicon tandem cell measured at 29.15% efficiency, a new world record for the technology.

HZB previously held the record for perovskite/silicon tandem cell efficiency at 25.5%, before UK/Germany-based startup Oxford PV pushed further, producing a 28% efficient cell in late 2018. HZB’s new record has been officially certified by Germany’s Fraunhofer ISE, and the group says it will now be targeting the 30% barrier for this technology.

Optimizing the layers

Key to the group’s achievement were improvements in several of the cell layers. “We developed a special electrode contact layer for this cell in collaboration with the group of Prof. Vytautas Getautis (Kaunas University of Technology), and also improved intermediate layers,” explain Eike Köhnen and Amran Al-Ashouri, doctoral students at HZB. The two went on to explain that the record-breaking cell also featured an improved perovskite composition, which increased stability and improved the balance of currents delivered by the two cells, and an optimized silicon oxide top layer on the bottom Si cell, which improves optical coupling between the two.

Structure of the record-breaking perovskite and silicon cell
Image: Eike Köhnen/HZB

While the cell measures just 1 cm² and was produced using laboratory techniques, the group points out that the processes used to produce it are “suitable in principle for large surface areas”. Without giving further details, the group states that initial tests have shown promising results for vacuum deposition processes to scale up production of these cells.

Pushing for 30%

The next target for the HZB group is to push this efficiency beyond 30% and closer to the technology’s practical limit of around 35%. According to group leader Steve Albrecht, discussions on the best route to achieving this are already underway. Albrecht’s group at HZB also holds the efficiency record for a perovskite/CIGS tandem cell at 23.26%, and has licensed the technology used in this cell to an unnamed Japanese manufacturer for commercial development.

Source: PV magazine

Tesla cuts prices of solar systems to counter drop in federal rebate

Ringing in 2020 brought many changes to the US solar industry, including a drop in the US federal incentive for residential solar systems. The tax credit fell from 30% of the solar system cost to 26% effective January 1st, 2020. This first step-down in the tax credit is a small piece of recognition for just how much the financials for solar have improved over the last 10 years.

Tesla made a serious push on the solar side of its business in 2019, including the introduction of a rental model that lets homeowners add solar to their home for just $100 down and a flat fee per month thereafter. To drive economies of scale, Tesla standardized its solar systems to four sizes.

This week, Tesla responded to the smaller federal incentive by cutting prices across its four options — small, medium, large, and extra-large out-of-the-box solar systems. For example, the after-rebate price of Tesla’s small 3.8 kW solar panel system was slashed from $8,342 to $7,770, a 6.9% drop compared to the price of the system just last week.

The falling prices create what Tesla CEO Elon Musk called a money printer and make the systems even more financially attractive for those considering a rooftop solar system.

To further sweeten the deal, Tesla also beefed up its referral program for solar systems. Customers signing up for a new Tesla Solar system using a referral link will earn a $250 award, and the person who referred them to purchase or rent the system also gets a $250 award. The awards cannot be redeemed for cash, but they can be put toward Tesla swag, vehicles, or other products, a nice perk on top of a system that’s already pretty cool by itself.

Source: CleanTechnica

Where there’s smoke, there’s reduced PV output

It’s a minor concern compared to the tragic loss of life, livelihoods and biodiversity caused by the bushfires still ravaging parts of Australia, but reduced output from PV systems due to smoke haze is an unwelcome by-product of blazes that have burned at a scale and ferocity never seen before.

Smoke and haze have enveloped rural and urban areas alike during unprecedented bushfires this Australian summer, reducing PV output in affected areas. Image: PV magazine, Natalie Filatoff

The haze and fallout from Australia’s catastrophic bushfires are also affecting the output of solar panels from solar farm arrays and rooftop installations — part of the renewable infrastructure that on clear sunny summer days can contribute as much as 50% of energy demand in the National Electricity Market.

The haze has so far been recorded as reducing solar output from residential rooftop systems by up to 45% in particular locations on certain days.

Solar Analytics, a smart-monitoring company that helps rooftop solar-system owners manage their energy consumption and solar generation, monitors 35,000 sites around Australia. It has correlated Air Quality Index readings over recent months with expected irradiance and solar panel output with disturbing results.

On December 10, one of the worst smoke-haze days in Sydney, for example, the median energy generated across 2,200 Sydney sites was 22.7 kWh, compared with 26.6 kWh on clear days during the first two weeks of December.

This 15% decrease was surpassed on December 21 — a cloud-free but extremely smoke-affected day — when Sydney rooftops showed a 27% decrease in output compared to a clear December day in 2018.

New Year’s Day in Canberra had the unhappy distinction of choking on a particulate matter (PM 2.5) reading of 347 micrograms per cubic metre (μg/m³), when a normal December/January day might register between 3 μg/m³ and 10 μg/m³; Solar Analytics recorded an associated 45% decrease in the city’s rooftop solar output.

Professor Gary Rosengarten of the School of Engineering at RMIT explains that the decrease is caused by light being absorbed, scattered or reflected by the particulate matter which makes up part of the haze. He tells PV magazine that you can see the effect in that “there’s not enough direct light to cast a sharp shadow”. He adds, “That also decreases the amount of light hitting directly on the PV panel.”

Victor Depoorter, Technical Director at solar forecasting company Proa Analytics says that on certain days, his company has observed smoke haze impacting all of the large-scale solar projects it’s monitoring, from Kidston Solar project in Far North Queensland to Tailem Bend Solar Power Project in South Australia.

Utility-scale solar takes a hit

In areas most affected, he says, “We could see solar farm output reduced by 10-15%.”

In terms of forecasting, Depoorter tells PV magazine, “It has been challenging to distinguish between clouds and clear sky on some days at some of our sites”, which is an issue he says because the behaviour of irradiance is not the same given haze or cloud cover.

“Water-carrying clouds, the big, fluffy, white clouds which are the most common here in Australia, have a very high reflection — they can almost completely block irradiance. That’s not the same case for smoke haze; the scattering mechanisms work differently.”

The forecasting technology developed by Depoorter and Matthew Jeppesen, Managing Director of Proa Analytics, now allows Kidston and Tailem Bend solar farms to self-forecast their output into the NEM, enabling the Australian Energy Market Operator to better integrate their energy flow with that of other generators in the grid. Proa Analytics is working towards achieving the same accreditation for other solar farms in the NEM.

So far, says Depoorter, the variations introduced by smoke haze have not been sufficient or frequent enough for the company to consider training its algorithms to definitively distinguish smoke haze from a cloud.

He hopes that will never become necessary.

Muddying the output

In the meantime, smoke haze settles as dust in affected areas, which Depoorter says can reduce solar panel output long after the air has cleared.

“If dust and ash mix with water from otherwise largely welcome rain, the result can be a film or patches of mud, which will also reduce solar panel efficiency.”

When the cars in any area look dirty from the combined effects of dust and rain, Depoorter says nearby solar installations are likely to be equally affected and need to be cleaned to restore optimal output.

“Unless people are reactive in cleaning their solar panels after these events,” says Depoorter, “losses in output due to soiling will be increased for some time” — until they are deliberately cleaned, or substantial rain washes away the grime.

Source: PV magazine

Longi claims 22.38% efficiency world record for PERC mono panel

The Chinese manufacturer said the result was confirmed by Germany’s TÜV Rheinland. The achievement beats the company’s previous record of 21.65%, set last month.

Chinese solar manufacturer Longi says it has set a world record for monocrystalline passivated emitter rear contact (PERC) module efficiency, at 22.38%. The figure overhauls the previous landmark of 21.65%, set by Longi last month.

The result, which the company says is a record for monocrystalline, was confirmed by German testing and certification provider TÜV Rheinland. No details were provided on how the improvement was achieved.

Longi has long dominated the milestones for that type of solar module and had set a record of 20.83% in November 2018, which at the time beat its previous achievements of 20.66% and 20.41%.

In February 2018, Longi announced an efficiency of 23.6% for monocrystalline PERC cells. That result also surpassed the company’s previous record of 22.71%, achieved in October 2017.

“Longi has continuously pushed the module efficiency limits of our high performance monocrystalline products to further improve the price-performance ratio,” the company announced. “This breakthrough once again confirms the development headroom of monocrystalline module technology.”

Source: PV magazine

Is First Solar stalled in innovation and growth? Markus Beck, a thin-film solar expert and former chief technologist at First Solar, provides an industry perspective on the fate of thin-film PV in the United States.

In light of thin-film solar’s very own “House of Cards” (Hanergy) and the recent troubles of companies that are barely hanging on, a closer look at the checkered past of the thin-film solar industry is warranted.

Thin-film solar accounts for less than 5% of the global module supply. Yet, as First Solar has demonstrated, thin-film solar can be 2.5- to 3-times more capital efficient than c-Si, when accounting for the entire process flow from polysilicon through module assembly.

So, why aren’t investors attracted to thin-film solar?

While it may be an uncomfortable answer, the truth is that thin-film solar manufacturers, with the exception of First Solar, have failed to demonstrate economically viable technologies and operations. That single thin-film solar success story is overshadowed by dozens of failures – Abound, Nanosolar, Primestar and Solyndra serve as the most spectacular examples.

Operational excellence

Has China achieved thin-film operational excellence?

It remains to be seen how the Hanergy drama will end – perhaps the Chinese leadership is convinced that their Chinese operations have acquired the skills to continue on their own, no longer requiring the know-how of the German and U.S. operations. If the technology transfer has been successful, Hanergy’s operations in China should demonstrate sustainable manufacturing.

However, given the status of CNBM Group’s Avancis and CTF Solar, as well as CHN Energy’s NICE Solar Energy operations, there is little confidence that Hanergy has mastered thin-film PV manufacturing. Operational excellence is a vastly underestimated element of economically viable manufacturing.

The fact that the processes championed by the above companies are poorly suited for high-volume manufacturing only adds to the problem. Manufacturing approaches rooted in the academia and technology of the 1990s or early 2000s are not compatible with today’s market requirements, and even less so with those of the future.

Even if the necessary innovations have been identified, the question remains whether these organizations have the will to implement them or the capital and time available to transition and ramp these highly advanced factories.

Thin-film mistakes

While there are several mistakes for which the individual companies bear sole responsibility, learning from them seems to be a virtue not exhibited by those in charge. A few individuals did understand how to leverage past expertise, capitalize on the lessons learned in both fundamental and applied research, and could have enabled success stories to match that of First Solar. For lack of access to sufficient and patient capital, however, none of these efforts ever saw the light of day.

‘Money makes the world go round’ is a recurring theme, and if we look back at the history of other emerging industries, there always is a gold rush phase giving rise to too many companies. However, what is atypical in the case of thin-film solar is that after about 25 years we are left with just one success – there should have been at least five or six. Why was access to capital such a deciding factor?

In the age of publicly traded companies and quarterly earnings, analysts (sitting in an office without any hands-on experience of the industries they evaluate) and investors do not like to invest in manufacturing – in particular because the assets are highly specific to the organization, and therefore next to impossible to sell. Asset-less, or at least asset-light, sectors, on the other hand, continue to see a huge influx of capital. As such, the thin-film solar module manufacturing sector was doomed from the get-go, at least in Europe and the United States.

Chinese policy

In comparison, China operates at a far more long-term and strategic level. If the country’s leadership has identified a particular sector as of strategic importance – for example, solar, Li-ion batteries, and electric mobility – it creates a policy environment that encourages massive investment into these sectors. This, in turn, creates a large number of private companies competing with one another for global market dominance.

A high percentage of these companies are ultimately nonviable. The best ones stand out, and an entire ecosystem, as well as a great deal of expertise, is created, all to be fully leveraged by the small number of survivors. Since neither Europe nor the United States has anything comparable to offer its industries, the end result is indeed global dominance by the survivors of China's internal runoff. At that point, the Chinese government rapidly scales back the subsidies and incentives, shifting them to the next industrial sector of strategic importance.

It is, quite frankly, mind-boggling that the West has failed to come up with a strategy to counter the Chinese. How many more sectors are we willing to abandon? Wall Street alone cannot sustain the U.S. economy without manufacturing. Data from the U.S. Department of Commerce shows that the manufacturing sector share of the U.S. economy has fallen to a record low of 11%, steadily declining from 25% in the 1960s.

Are we willing to see the demise of the automotive industry next? Are we willing to sacrifice a clean environment and high-paying manufacturing jobs at the altar of Wall Street and its market indices?

Valuable lessons

It’s not too late for thin-film solar in the United States.

As mentioned above, thin-film solar is superior in its capital efficiency. In addition, thin-film solar has a three to five times lower CO2 footprint and a two to five times higher energy return on energy investment than c-Si. Since the financial markets are unlikely to invest in thin-film solar on their own, politicians owe it to their constituents to create a long-term industrial policy framework that directs investment into thin-film solar and, in turn, creates high-paying jobs.
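
Energy return on energy investment (EROI) is simply the lifetime energy a system delivers divided by the energy consumed to manufacture and deploy it. The sketch below (Python) shows the calculation; every number in it is an illustrative assumption chosen only to land within the two-to-five-times range cited above, not data from the article.

    # Minimal EROI sketch. All numbers below are illustrative assumptions.
    def eroi(lifetime_output_kwh: float, embodied_energy_kwh: float) -> float:
        """EROI = energy delivered over the system's life / energy used to build it."""
        return lifetime_output_kwh / embodied_energy_kwh

    # Hypothetical 1 kW arrays producing 1,500 kWh/year over a 25-year life:
    lifetime_output = 1500 * 25
    print("c-Si      (assumed 9,000 kWh embodied): EROI =", round(eroi(lifetime_output, 9000), 1))
    print("thin film (assumed 3,000 kWh embodied): EROI =", round(eroi(lifetime_output, 3000), 1))

With those assumed embodied-energy figures, the thin-film array returns roughly three times more energy per unit of energy invested.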

It is not too late – at present, Europe and the United States still hold technological leadership in thin-film solar. The blueprints for economically viable thin-film module manufacturing exist. The recent insolvency of Calyxo (December 2019), failure of Siva Power (October 2019), and troubles at CNBM, NICE and Solar Frontier are insignificant in the bigger picture, but offer valuable lessons.

Absent serious competition, First Solar might be next: it took First Solar more than four years to abandon small modules and commit to its Series 6 panels, even though the concept had already been conceived and proven viable. First Solar has not raised its hero-cell efficiency and has reduced its R&D expenditures over the last four years.

Is this the legacy we want to leave? Is this what we want all the excellent thin-film R&D in Europe and the United States to amount to?

Source: PV magazine

A team at the U.S. National Renewable Energy Laboratory has come up with a new process that would reduce the production cost of highly expensive – and highly efficient – gallium arsenide cells.

Image: NASA

Solar researchers on both sides of the Pacific are looking to space for better solar cells. In separate announcements, it has emerged that Chinese module manufacturer Jinko Solar and the U.S. National Renewable Energy Laboratory (NREL) are both exploring how PV technologies used in space could improve solar power generation back on Earth.

At the NREL, researchers claim to have made a breakthrough in III-V cell technology which they say could bring down the costs of the highly efficient – and very expensive – cells quite significantly. The team said it has grown aluminum indium phosphide (AlInP) and aluminum gallium indium phosphide (AlGaInP) in a hydride vapor phase epitaxy reactor.

Named for the groups of the periodic table in which their constituent materials are found, III-V solar cells are commonly used in space applications, such as powering satellites or the Mars Rover. Although more efficient than the silicon wafer-based cells used on Earth, they are prohibitively expensive for terrestrial use.

An epi-taxing problem

The expense is largely bound up in the two-hours-per-cell metalorganic vapor phase epitaxy (MOVPE) production process, which involves several chemical vapors being deposited onto a substrate in a single chamber.

A partial solution was offered by the NREL's dynamic hydride vapor phase epitaxy (D-HVPE) process, which reduced the time required to less than a minute per cell. However, the inability to incorporate an aluminum-containing layer meant cell efficiency dropped.
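
The throughput implication of those two deposition times is easy to quantify. The sketch below (Python) assumes a single reactor running around the clock with no overhead between cells, which is an idealization for illustration rather than a description of any actual production line.

    # Rough throughput comparison implied by the deposition times quoted above.
    # Idealized: one reactor, continuous operation, no handling overhead.
    def cells_per_day(minutes_per_cell: float) -> float:
        return 24 * 60 / minutes_per_cell

    movpe = cells_per_day(120)  # roughly two hours per cell (MOVPE)
    dhvpe = cells_per_day(1)    # under a minute per cell (D-HVPE)
    print(f"MOVPE:  ~{movpe:.0f} cells/day")
    print(f"D-HVPE: ~{dhvpe:.0f} cells/day (~{dhvpe / movpe:.0f}x more)")

Under those idealized assumptions, a single D-HVPE chamber turns out on the order of a hundred times more cells per day than an MOVPE chamber.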

Using D-HVPE, the NREL made solar cells from gallium arsenide (GaAs) and gallium indium phosphide (GaInP) with the latter working as a “window layer” to passivate the front while permitting light to pass through to the GaAs absorber layer. However, the GaInP layer is not as transparent as the AlInP layer which can easily be grown in a MOVPE reactor.

The world efficiency record for MOVPE-grown GaAs solar cells with AlInP window layers is 29.1%. For GaInP alternatives, the maximum figure for HVPE-grown solar cells is estimated to be 27%.

Separate advances

“There’s a decent body of literature that suggests that people would never be able to grow these compounds with hydride vapor phase epitaxy,” said Kevin Schulte, a scientist in the NREL’s Materials Applications & Performance Center and lead author of a paper highlighting the new research. “That’s one of the reasons a lot of the III-V industry has gone with [MOVPE], which is the dominant III-V growth technique.” Referring to the latest development, Schulte added: “This innovation changes things.”

The NREL team said they had been working to improve the economics of GaAs cells by moving the technology forward incrementally. First, the D-HVPE process reduced costs; now, the ability to grow aluminum-bearing layers promises improved efficiency. With aluminum added to the D-HVPE mix, the scientists said they should be able to reach parity with MOVPE solar cells.

The laboratory last year produced a 25.3% efficient GaAs cell using D-HVPE. Kelsey Horowitz, part of the techno-economic analysis group at the NREL’s Strategic Energy Analysis Center, suggested that D-HVPE cells made at scale, with the help of some tweaks, could generate electricity at $0.20-0.80/W. She said applications such as electric vehicle integration, systems for roofs not strong enough to support a silicon PV array, and portable or wearable solar panels could be viable at that cost. “There are these intermediate markets where higher prices can be tolerated,” she said.
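
A dollars-per-watt figure like Horowitz's range follows from two inputs: manufacturing cost per unit of module area and cell efficiency. The sketch below (Python) shows the arithmetic using the 25.3% efficiency quoted above; the two cost-per-square-metre values are purely illustrative assumptions, not NREL estimates.

    # $/W = cost per unit area / (efficiency x 1000 W/m^2 at standard test conditions)
    def dollars_per_watt(cost_per_m2: float, efficiency: float) -> float:
        return cost_per_m2 / (efficiency * 1000.0)

    for cost in (50.0, 200.0):  # assumed manufacturing costs in $/m^2
        print(f"${cost:.0f}/m^2 at 25.3% efficiency -> ${dollars_per_watt(cost, 0.253):.2f}/W")

With those assumed area costs, the result brackets roughly $0.20/W to $0.80/W, which is how a cost-per-area target maps onto the quoted range.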

“The HVPE process is a cheaper process,” said Aaron Ptak, a senior scientist at the NREL’s National Center for Photovoltaics. “Now we’ve shown a pathway to the same efficiency that’s the same as the other guys but with a cheaper technique. Before, we were somewhat less efficient but cheaper. Now there’s the possibility of being exactly as efficient and cheaper.”

Jinko

Across the Pacific, Jinko Solar has signed a memorandum of understanding with the Shanghai Institute of Space Power-Sources to jointly develop high-efficiency solar cell technology. The solar manufacturer said it will use a more robust silicon wafer as the supporting substrate and bottom cell.

Jinko did not provide any further detail regarding cell technology but said its high-efficiency solar tech would take advantage of the cheap availability of silicon wafers and would easily transfer into large scale manufacturing.

“The strategic cooperation with [the] Shanghai Institute of Space Power-Sources has great importance,” said Jin Hao, VP of Jinko Solar. “In the future we will continue to increase technical cooperation, leading our industry in the name of technical innovation and providing more efficient solar panels with a wider range of choices for global customers.”

Jinko predicted the new cell technology would prompt a higher conversion rate than current technologies but said more research was needed.

Source: PV magazine

What are thin-film solar panels?

Thin-film solar panels are made with solar cells whose light-absorbing layers are about 350 times thinner than those of a standard silicon panel. Because of their narrow design and the efficient semiconductors built into their cells, thin-film solar cells are among the lightest PV cells available while still maintaining strong durability.

Thin-film solar panels are typically made with one of the following technologies:

  • Cadmium Telluride (CdTe)
  • Amorphous Silicon (a-Si)
  • Copper Indium Gallium Diselenide (CIGS)
  • Gallium Arsenide (GaAs)
  • Organic photovoltaic

Cadmium Telluride (CdTe) solar panels


Cadmium Telluride (CdTe) is the most widely used thin-film technology, holding roughly 50% of the market share for thin-film solar panels. CdTe thin-film panels are made from several thin layers: one main energy-producing layer made from the compound cadmium telluride, and surrounding layers for electricity conduction and collection. CdTe contains significant amounts of cadmium, a toxic element. First Solar is the top innovator and seller in this space.

CdTe is the most common thin-film solar technology, mainly because of First Solar’s utility-scale dominance. In 2016, First Solar hit a CdTe world-record cell efficiency of 22.1%, although its modules average 17%. The Series 6 module should produce 420 W, and its smaller Series 4 modules peaked at about 100 W.
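
Those wattage figures imply module sizes that can be sanity-checked from the quoted average efficiency. The sketch below (Python) assumes the ~17% module average applies to both series and that power is rated under standard test conditions of 1,000 W/m²; both are simplifying assumptions, not First Solar specifications.

    # Implied module area = rated power / (efficiency x 1000 W/m^2)
    def implied_area_m2(rated_watts: float, efficiency: float) -> float:
        return rated_watts / (efficiency * 1000.0)

    print(f"Series 6 (420 W at ~17%): ~{implied_area_m2(420, 0.17):.2f} m^2")
    print(f"Series 4 (100 W at ~17%): ~{implied_area_m2(100, 0.17):.2f} m^2")

On those assumptions, the 420 W module works out to roughly 2.5 m² and the 100 W module to well under 1 m².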

Advantages and disadvantages of cadmium telluride solar panels

One of the most exciting benefits of CdTe panels is that their band gap sits close to the ideal range for converting sunlight. Functionally, this means that CdTe solar panels can capture energy from shorter wavelengths than traditional silicon panels can, matching the solar spectrum more closely for better sunlight-to-electricity conversion. Additionally, cadmium telluride panels can be manufactured at low cost, as cadmium is abundant and is generated as a by-product of refining key industrial metals like zinc.
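
The "ideal wavelength" point comes down to band gap: a cell can only absorb photons with wavelengths shorter than the cutoff set by its band-gap energy. The sketch below (Python) computes that cutoff from the commonly cited band-gap values of roughly 1.45 eV for CdTe and 1.12 eV for crystalline silicon; those are general reference figures, not numbers from this article.

    # Cutoff wavelength (nm) = h*c / E_gap, with h*c ~= 1239.84 eV*nm
    HC_EV_NM = 1239.84

    def cutoff_wavelength_nm(bandgap_ev: float) -> float:
        return HC_EV_NM / bandgap_ev

    print(f"CdTe (~1.45 eV): absorbs photons below ~{cutoff_wavelength_nm(1.45):.0f} nm")
    print(f"c-Si (~1.12 eV): absorbs photons below ~{cutoff_wavelength_nm(1.12):.0f} nm")

CdTe's wider band gap puts its cutoff nearer the peak of the solar spectrum than silicon's, which is what the "close to an ideal wavelength" claim refers to.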

The main concern with CdTe panels is pollution. Cadmium by itself is one of the most toxic materials known, and cadmium telluride also has some toxic properties. Currently, the general opinion on using cadmium telluride is that it is not harmful to humans or the environment in residential or industrial rooftop applications, but disposal of old CdTe panels continues to be a concern.

Amorphous Silicon (a-Si) solar panels


Amorphous Silicon (a-Si) is the second most popular thin-film option after CdTe, and it is the thin-film technology most similar to a standard silicon wafer panel. Amorphous Silicon is a much better option than its counterparts (CdTe, CIGS) in terms of toxicity and durability, but it is less efficient and is typically used for small load requirements like consumer electronics. Scaling up has always been a hindrance for a-Si.

Amorphous Silicon (a-Si) is the oldest thin-film technology. It uses chemical vapour deposition to place a thin layer of silicon onto the glass, plastic or metal base. It is nontoxic, absorbs a wide range of the light spectrum and performs well in low light but loses efficiency quickly. One layer of silicon on an amorphous solar panel can be as thin as 1 micrometer, which is much thinner than a human hair.

Advantages and disadvantages of amorphous solar panels

Unlike many other thin-film panel options, amorphous silicon panels use minimal toxic materials. When compared to mono- or poly-crystalline solar panels, amorphous panels use much less silicon. Amorphous silicon solar panels are also bendable and less subject to cracking than traditional panels constructed from solid wafers of silicon.

The ongoing challenge with amorphous solar panels is their low efficiency. Due to complicated thermodynamics and the degradation of amorphous silicon, among other factors, amorphous solar cells are less than half as efficient as mono- or poly-crystalline solar panels. The highest efficiency on record for a-Si is 13.6%. Attempts to raise the efficiency of amorphous panels by stacking several layers, each tuned to a different wavelength of light, have proven somewhat effective, but the overall efficiency of these types of thin-film panels remains low compared to other options.

Copper Indium Gallium Diselenide (CIGS) solar panels


Laboratory CIGS cells have reached efficiency highs of 22.4%. However, these performance metrics are not yet possible at scale. The primary manufacturer of CIGS cells was Solyndra (which went bankrupt in 2011). Today, the leader is Solar Frontier. MiaSolé also manufactures CIGS panels in the U.S. and China.

CIGS solar cells are made from a compound called copper indium gallium diselenide sandwiched between conductive layers. This material can be deposited on substrates such as glass, plastic, steel, and aluminum, and when deposited on a flexible backing, the layers are thin enough to allow full-panel flexibility.

Advantages and disadvantages of CIGS solar panels

Unlike most thin-film solar technologies, CIGS solar panels offer a potentially competitive efficiency to traditional silicon panels. Solar Frontier has a 22.9% CIS cell efficiency record, while its full modules average lower and peak at 180 W. MiaSolé’s flexible CIGS thin-film modules average 16.5% efficiency and may peak at 250 W.

CIGS cells also use the toxic chemical cadmium. However, CdTe panels have a higher percentage of cadmium, and CIGS cells are a relatively responsible thin-film option for the environment. Even better, in some models, the cadmium is completely removed in favour of zinc.

The primary disadvantage of CIGS panels is their price. While CIGS solar panels are an exciting option, they are currently very expensive to produce, to the point where they can’t compete with traditional silicon or cadmium telluride panels. Production costs continue to be an issue for the CIGS solar panel market.

Gallium Arsenide (GaAs) solar panels


Gallium Arsenide (GaAs) is a costly technology. GaAs holds a world record of 29.1% efficiency for all single-junction solar cells and 31.6% for dual-junction solar cells. GaAs is primarily used on spacecraft and is intended for versatile PV installations in unusual environments.

One of the leading companies in GaAs cells is Alta Devices. Alta Devices solar cells offer an exceptional combination of high efficiency, flexibility, thinness, and low weight. In addition, the product is highly configurable to meet specific physical, mechanical and electrical requirements. These attributes make Alta Devices cells well suited to high-altitude long-endurance (HALE) aircraft, allowing them to fly longer, higher, and at more latitudes than competing solar technologies.

Advantages and disadvantages of GaAs cells

The backbone of Alta Devices’ technology is gallium arsenide (GaAs), a III-V semiconductor with a zinc-blende crystal structure. GaAs solar cells were first developed in the early 1970s and have several unique advantages. GaAs is naturally robust to moisture and UV radiation, making it very durable. It has a wide, direct band gap, which allows for more efficient photon absorption and high output power density. Finally, it has a low temperature coefficient and strong low-light performance.
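
The "low temperature coefficient" claim is easiest to see with a worked example of output at an elevated cell temperature. The coefficient values in the sketch below (Python) are typical published figures (roughly -0.08%/°C for GaAs cells and -0.35%/°C for crystalline silicon), not numbers from this article, and the 65 °C cell temperature is an assumption.

    # Relative output at temperature: P(T) = P_stc * (1 + gamma * (T - 25))
    def power_at_temp(p_stc_watts: float, gamma_per_degc: float, cell_temp_c: float) -> float:
        return p_stc_watts * (1 + gamma_per_degc * (cell_temp_c - 25))

    HOT_CELL_C = 65  # assumed hot-day cell temperature
    print(f"GaAs (-0.08 %/degC): {power_at_temp(100, -0.0008, HOT_CELL_C):.1f} W per 100 W rated")
    print(f"c-Si (-0.35 %/degC): {power_at_temp(100, -0.0035, HOT_CELL_C):.1f} W per 100 W rated")

Under those assumed coefficients, a hot GaAs cell gives up only a few percent of its rated output, while a comparable silicon cell loses well over ten percent.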

Like CIGS solar panels, the most significant disadvantage of GaAs cells is their price. GaAs solutions are costly, which limits them to niche markets like space, HALE aircraft and small unmanned aircraft and drones. Given their cost, GaAs cells can’t compete with traditional silicon or cadmium telluride panels.

Organic photovoltaic solar panels


Organic photovoltaic (OPV) cells use conductive organic polymers or small organic molecules to produce electricity. In an organic photovoltaic cell, several thin layers of organic material are deposited from vapour or solution and held between two electrodes that carry the electrical current.

Advantages and disadvantages of organic PV cells

The building-integrated photovoltaic (BIPV) market has the most to benefit from OPV cells. Due to the ability to use various absorbers in an organic cell, OPV devices can be coloured in several ways, or even made transparent, which has many applications in unique BIPV solar solutions. The materials needed to build organic solar cells are also abundant, leading to low manufacturing costs and subsequently, low market prices.

Like other thin-film options, organic photovoltaic cells currently operate at relatively low efficiencies. OPVs have been constructed with about 11% efficiency ratings, but scaling module production up while keeping efficiencies high is a problem for the technology. Much of the research currently surrounding OPVs is on how to boost their efficiency.

An additional issue with OPV technology is a shorter lifespan than both other thin-film options and traditional mono- or poly-crystalline panels. Cell degradation that doesn’t occur in inorganic modules is an ongoing struggle for organic photovoltaic products.

Efficiency comparison: thin-film and crystalline silicon modules

Comparatively, a typical 60-cell crystalline silicon (c-Si) module averages a power output between 250 W and 350 W with an efficiency between 17% and 18%, with high-efficiency brands performing even better. One would need more thin-film modules, and more area, to produce the same power as a smaller array of c-Si modules. Crystalline silicon modules are simply more consistently dependable for the majority of solar markets, and that is why they remain the dominant panel choice.
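
The area penalty is straightforward to estimate. The sketch below (Python) compares the panel area needed for an assumed 6 kW array at a few representative module efficiencies; the 6 kW system size and the ~10% figure used for amorphous silicon are illustrative assumptions, while the c-Si and CdTe figures follow the averages quoted in this article.

    # Array area (m^2) = system size / (module efficiency x 1000 W/m^2)
    SYSTEM_W = 6000.0  # assumed residential-scale system

    def array_area_m2(efficiency: float) -> float:
        return SYSTEM_W / (efficiency * 1000.0)

    for label, eff in [("c-Si (~18%)", 0.18), ("CdTe (~17%)", 0.17), ("a-Si (~10%)", 0.10)]:
        print(f"{label}: ~{array_area_m2(eff):.0f} m^2")

On those assumptions, a lower-efficiency thin-film array needs nearly twice the area of a c-Si array to reach the same nameplate power.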


The future of the thin-film market

No single thin-film brand appears to be making a grab for c-Si’s market share in the United States. First Solar is expanding its CdTe manufacturing, but that’s because it has found that utility-scale sweet spot and is dominating globally. Most CIGS and CIS manufacturers market themselves as niche products.

MiaSolé semi-rigid CIGS modules were initially designed for commercial rooftops, but the company has since branched out into emerging markets like transportation and commercial trucks. When it comes to traditional solar applications, MiaSolé’s best play is its light weight.

Image: A MiaSolé flexible CIGS module

CIGS manufacturer Sunflare also works with nontraditional solar markets like transportation, marine and modular/tiny home applications. The company has been working to improve the manufacturing process at its plants in Sweden and China to increase thin-film adoption.

Source: A+ Solar Solutions / EnergySage / Solar Power World