From time to time we get larger solar storms. One of the last major ones was the March 1989 geomagnetic storm, which caused a nine-hour power cut in Quebec. That happened because the grid wasn’t prepared for it. Modern power supplies are hardened against this, making such events much less likely.

There were earlier studies suggesting widespread damage to transformers which could take months to years to repair, widespread power supply problems that would take a long time to resolve, and trillions of dollars of damage - a large economic impact.

However, later studies found the newer transformers are more resilient than previously thought. A major solar storm could lead to some localized power cuts lasting hours - but not weeks or months.

Some of the older transformers would be damaged, it’s true. But there’s a lot of redundancy in the system. Many of the areas supplied by a damaged transformer might not even get any power cuts because the system would reconfigure and other transformers would be able to step in and take up the slack.

As for the cost, a major solar storm could cause tens of billions of dollars of damage in the US, maybe over $100 billion, similar to the effect of a large hurricane. By comparison, ordinary minor power outages cause over $100 billion of economic effects every year.

The risk of solar storms was never that we end up in a new dark age. That’s massive journalistic hype plus exaggerated pseudoscience.

Solar storms only induce voltages in very long wires. In a US survey the maximum gradient was 27.2 V/km at a site in Maine, which is 0.0272 volts per meter - not going to do anything to a car or smartphone, say, unless you connect it to a very long cable. Nearly 30% of the surveyed land area would see voltages of at least 1 volt per kilometer in a record-breaking, once-per-century solar storm.

The risk is only to the hardware connected to those long cables. The cables have step-up and step-down transformers at each end to adjust the voltage and are not connected directly to anything else, so the risk is just to those transformers.

The magnetic fields are very weak - but spread over huge distances. They don't produce EMPs despite the many blog posts you may see claiming they do. Electricity companies need to protect certain machines, step up / down transformers, because they are attached to long wires tens or hundreds of kilometers long.

The maximum potential of 0.0272 volts per meter is not going to do anything to a car or smartphone, say, unless you connect it to a very long cable.

0.0272 volts per meter is 27.2 volts per kilometer. Even that is minute compared to the 345,000 volts in high voltage transmission lines - a line needs to be tens or hundreds of kilometers long to be an issue, and only some wires have these problems at all. It's only an issue if the underlying geology is insulating. If it is conductive, e.g. clays or shales, then little voltage is generated. If it is resistive, like granite, then quite a voltage can build up in the wire that isn't matched by the same voltage variation in the ground below the cable - which is what causes the problems.

But those big machines are triply redundant and no longer as vulnerable as they were in 1989 - the only time we have ever had a power cut caused by a solar storm, for nine hours in Quebec. Even a once-in-a-century Carrington-type event wouldn't cause the catastrophe of the alarmist stories.


Old step-down transformers used to be vulnerable, and that is what caused the power cuts in Quebec. These are multi-million dollar machines and it takes months to build each one. That's what led to those predictions of trillion-dollar blackouts lasting for months.

However, manufacturers learnt from the Quebec power cut and modern transformers are hardened against that. The older studies didn't take account of that properly. Newer studies do take account of this resilience, and find that only the older models and ones near the end of their life are at risk in a Carrington type event.

That is why the estimates got downgraded from trillions of dollars to tens of billions, and from a country-wide blackout of several months to short local power cuts of a few hours - and most areas wouldn't have them at all, though replacing the damaged transformers would be an expensive job. Most are triply redundant anyway, so you'd deal with it by routing the power around the ones that had failed until you got the new ones up and running.

Also, researchers found that only some power lines are at risk. If the underlying rock is conductive, as it is over most of the US for instance, then not much voltage can build up in the power lines and the lines are protected.

This shows the effect of the underlying rock on a small part of the grid around the city of Denver. Power lines are colour coded according to the maximum voltage expected in a once per century solar storm.

The dots show the survey sites used as the basis of their predictions, the coloured lines show the predicted maximum voltage in each grid line. Most of the area is low risk because the Denver basin is conductive, but some areas are at high risk due to the resistive Rocky mountain material of crystalline rock. A 100‐year Geoelectric Hazard Analysis for the U.S. High‐Voltage Power Grid

The voltage that builds up in a once per century solar storm depends on the resistance or conductance of the underlying rock and the length of the power line. A long powerline over resistive rock such as crystalline metamorphic rock is most at risk. A short powerline over conductive rock such as shales or clays is least at risk. The risk also depends on the latitude: high latitude regions are higher risk. So the regional risk varies hugely, and work on making the grid more resilient can focus on the most at-risk regions.


Our sun goes through an eleven year cycle of more, then fewer sunspots. Most solar storms happen when sunspots are most numerous. However, the largest ones can happen at any time in the solar cycle. Solar storms during solar minimum are rare, but not unheard of.

For those interested in the technical details:


You can get an idea of the worst to expect from solar storms from their warning levels here NOAA Space Weather Scales

Here a solar cycle is 11 years.

Geomagnetic storms (4 per solar cycle)

Summary: some power cuts, repairs to the power grid, satellite glitches (e.g. GPS), and can affect low frequency radio navigation (not a big issue for most of us as we don’t use the low frequencies that bounce off the ionosphere any more).

Solar Radiation Storms (fewer than 1 per solar cycle)

Summary: not a good time for an astronaut to do a spacewalk as they would be exposed to high levels of radiation. Also passengers in planes flying at high altitudes over the polar regions may get a radiation dose. No effect at ground level - the radiation can’t get through our atmosphere. Complete blackout of high frequency radio communications and major navigation issues.

Radio blackouts (fewer than 1 per solar cycle)

Complete radio blackout over the sunlit side of Earth lasting for hours. No high frequency radio contact with mariners and planes en route. Navigation errors.

Of all of those issues, the big one is the effect on power supplies. The others are minor issues in the larger scheme of things, over in a day or a few hours. Also there isn’t much you can do about them, except hardening satellites against solar storms. But we can harden our power supplies.

BTW though they don’t say it there, you also may get temporary interference with landline telephone communications - but not cell phone communication.


The alarmist articles say that many transformers would be affected and it would take years to repair them all, during which many people would have no power. What do the detailed studies say?

There’s a good overview paper here: The Economic Impact of Space Weather: Where Do We Stand? - and I found these sources by following through from that. It suggests the economic impact could be large - similar in economic effect to a major hurricane or worse, with widespread power cuts - so if you want the details, see that report. But modern studies don’t support the idea of large numbers of transformers destroyed and blackouts lasting for years. Rather, the blackouts would last for hours, though repairs would take weeks to months.

Amongst its many citations, it cites two major studies on the impact on transformers and the power grid, one for the UK and one for the US. The 2013 study of the effects on the UK concluded, for a superstorm:

“The reasonable worst case scenario would have a significant impact on the national electricity grid. Modelling indicates around six super grid transformers in England and Wales and a further seven grid transformers in Scotland could be damaged through geomagnetic disturbances and taken out of service. The time to repair would be between weeks and months. In addition, current estimates indicate a potential for some local electricity interruptions of a few hours. Because most nodes have more than one transformer available, not all these failures would lead to a disconnection event. However, National Grid’s analysis is that around two nodes in Great Britain could experience disconnection.”

That’s two nodes out of over 600 that could experience disconnection (as they explain later).


  • 6 transformers affected in England, 7 in Scotland
  • Repair time of weeks to months.
  • Some local electricity interruptions of a few hours
  • Most nodes have more than one transformer so not all the transformer failures would lead to customers getting disconnected from the power grid.


A 2012 report for the US comes to a similar conclusion. The most likely effect is voltage instability leading to power cuts that would be resolved in a matter of hours. Some older transformers would be damaged, but it doesn’t support the conclusion of earlier reports that large numbers of transformers would be damaged.

Quoting from the conclusion of their executive summary:

“The most likely worst‐case system impacts from a severe GMD event and corresponding GIC flow is voltage instability caused by a significant loss of reactive power support simultaneous to a dramatic increase in reactive power demand. Loss of reactive power support can be caused by the unavailability of shunt compensation devices (e.g., shunt capacitor banks, SVCs) due to harmonic distortions generated by transformer half‐cycle saturation. Noteworthy is that the lack of sufficient reactive power support, and unexpected relay operation removing shunt compensation devices was a primary contributor to the 1989 Hydro‐Québec GMD‐induced blackout.”

“NERC recognizes that other studies have indicated a severe GMD event would result in the failure of a large number of EHV transformers. The work of the GMD Task Force documented in this report does not support this result for reasons detailed in Chapter 5 (Power Transformers), and Chapter 8 (Power System Analysis). Instead, voltage instability is the far more likely result of a severe GMD storm, although older transformers of a certain design and transformers near the end of operational life could experience damage, which is also detailed in Chapter 5 (Power Transformers).”

I.e. transformers will not be damaged, except possibly some older ones and transformers near the end of their operational life.

The installed base of EHV transformers is around 2,000 (with a maximum voltage rating greater than or equal to 345 kV) according to this report, plus tens of thousands of smaller ones:

“The United States is one of the world’s largest markets for power transformers, with an estimated market value of over $1 billion USD in 2010, or almost 20 percent of the global market. The United States also holds the largest installed base of LPTs in the world. Using certain analysis and modeling tools, various sources estimate the number of EHV LPTs in the United States to be approximately 2,000. While the estimated total number of LPTs (capacity rating of 100 MVA and above) installed in the United States is unavailable, it could be in the range of tens of thousands, including LPTs that are located in medium voltage transmission lines with a primary voltage rating of 115 kV. Figure 11 represents the historical annual installment of LPTs in the United States, not including replacement demand.”


This was the main focus of the paper The Economic Impact of Space Weather: Where Do We Stand?

Some reports suggest economic effects ranging from over $100 billion to trillions of dollars.

“The total economic loss varies between $0.5 tn and $2.7 tn based on calculations examining disruption to the global supply chain. An alternative methodology finds a total loss of $140–$613 bn. This is lower as it accounts for the “dynamic response of the global economy.” Losses to U.S. GDP are estimated to range between $136 bn and $613 bn over five years following the space weather event, with the worst affected states being Illinois and New York.”

That’s for a “worst-case scenario where there is significant transformer damage causing prolonged power outage”

However, other studies, based on power cuts as the main economic effect, come to much lower figures. The economic effect of the August 14, 2003 northeast blackout was $4–10 bn. The paper points out that there's an annual economic loss of between $104 bn and $164 bn from short blackouts.

“The RAE report focuses on the United Kingdom in particular, and its conclusion is reached on the basis of studies and assessments undertaken by the National Grid. In particular, it is noted that since 1997, newly installed transformers have employed a more GIC-resistant design, which strengthens resilience. Outages are therefore measured in hours to days, rather than months, but such events still have a considerable economic impact through primary and secondary losses.[149] As examples, the economic impact of Hurricane Katrina was estimated to be $81–$125 bn[150] and the August 14, 2003 northeast blackout was $4–$10 bn.[151] Analyses of historical blackout events in the United States indicate that even short blackouts, which occur several times during a year in the United States, sum up to an annual economic loss between $104 bn and $164 bn.[152] These figures are based on insurance industry pricing models for business interruption insurance. (Details on data and methodology are not publicly available.)”

Still, if you can save $10 bn, this is something well worth planning to prevent!

There would also be an economic effect on satellites and they cite a 2006 study that reached a figure of $70 bn for those.

On GPS, they say that

“During a major storm, complete loss of GNSS service for one day is estimated, with extended loss of service for three days. Although many systems can revert to backup technologies, the impact of the reduced accuracy over a prolonged multiday outage is not well understood or verified.”

There would also be effects on planes. They would lose GPS; planes in flight would be permitted to continue their flights, but planes on the ground would not be able to take off. So that would have an economic effect. It could also affect pipelines and railways, but there the paper just suggests that it would be possible to quantify the effects in future research, and doesn’t give any figures.


Also new research on the Carrington event suggests it wasn't as major an event as it seemed to early researchers, probably no more major than several solar storms since then. The early data was misinterpreted.

Indeed there is skepticism about whether an event as major as the early papers estimated is even possible. See Comment on “The extreme magnetic storm of 1–2 September 1859” by B. T. Tsurutani, W. D. Gonzalez, G. S. Lakhina, and S. Alex

This paper summarizes it as:

“The major risk is attached to power distribution systems and there is disagreement as to the severity of the technological footprint. This strongly controls the economic impact. Consequently, urgent work is required to better quantify the risk of future space weather events.”

That’s from the good overview paper, The Economic Impact of Space Weather: Where Do We Stand?


Power companies can prepare and build in resilience to reduce the effect of a solar storm on the grids. To help with this, the USGS has prepared geoelectric hazard maps for most of the US: the northern region and the eastern region down to Florida.

The hazards depend on the underlying geology. If the underlying rock is conductive then not much voltage builds up; if it is resistive then a strong geoelectric field builds up above it. They used observations of the effects of magnetic storms to work out which areas are most vulnerable. These are example observations for 14 March 1989.

Observed (a) geomagnetic field, (b) geoelectric field, (c) geoelectrically induced voltage, and (d) average line electric field at 01:00 UTC on 14 March 1989.

They then used this together with an analysis of geomagnetic storms to make a once per century extreme prediction of the geoelectric field.

During a one in a century extreme solar storm, 322 of the 1079 sites, nearly 30% of the surveyed land area, have an estimated geoelectric field of at least one volt per kilometer.

They then took this one step further and mapped it onto the grid system (the analysis will need to be redone if the network is changed). They worked out the effects independently for all 17,258 transmission lines. As you can see, the effect is localized - some particular lines will be affected much more than others (the brighter lines in these diagrams):

Once per century extreme geomagnetic storm predictions for

a) transmission line voltages - the voltage difference along the line (more for longer lines in a constant electric field)
b) transmission line electric field - voltage per kilometer.

The field strengths in volts per kilometer vary hugely:

Once‐per‐century geoelectric field strengths span more than 3 orders of magnitude from a minimum of 0.02 V/km at a site in Idaho to a maximum of 27.2 V/km at a site in Maine, with nearly 30% of the surveyed land area exceeding 1 V/km.

They discuss several regions in more detail, one is a site near Denver Colorado which shows the importance of a high resolution survey.

We saw this map in the introduction, here it is again, it shows the effect of the underlying rock on a small part of the grid around the city of Denver. Power lines are colour coded according to the maximum voltage expected in a once per century solar storm.

The dots show the survey sites used as the basis of their predictions, the coloured lines show the predicted maximum voltage in each grid line. Most of the area is low risk because the Denver basin is conductive, but some areas are at high risk due to the resistive Rocky mountain material of crystalline rock.

They highlighted the need for more detailed surveys. The yellow dot near the middle of this map is a single survey point that showed up a potential hazard which would otherwise be missed. However, it may well over-estimate the effect on the grid also, as higher resolution would be needed to show how the resistance varies between this data point and the surrounding high conductance data points.

A point with a high estimate of the local geoelectric field in a region where it is mainly low can lead to overestimating the surrounding risk, and a point with a low estimate in a region where it is high could lead to underestimating the risk. A 100‐year Geoelectric Hazard Analysis for the U.S. High‐Voltage Power Grid

The voltage that builds up in a once per century solar storm depends on the resistance or conductance of the underlying rock and the length of the power line. A long powerline over resistive rock is most at risk. A short powerline over conductive rock is least at risk. The total voltage is the voltage per kilometer added up over the length of the power line, and this is what can damage the step up / down transformers at either end of the power line in a solar storm.
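To make the arithmetic concrete, here's a minimal sketch in Python. It assumes, unrealistically, a uniform geoelectric field along the whole route; the field strengths are the survey figures quoted in this article, but the line lengths are made up for illustration:

```python
def induced_voltage(field_v_per_km, length_km):
    """Total geomagnetically induced voltage across a transmission line,
    assuming a uniform geoelectric field along its whole route."""
    return field_v_per_km * length_km

# Worst once-per-century field in the US survey (27.2 V/km, a site in Maine)
# over a hypothetical 200 km line:
print(induced_voltage(27.2, 200))   # about 5440 volts end to end

# A short line over conductive rock (0.02 V/km, the Idaho minimum), 50 km:
print(induced_voltage(0.02, 50))    # about 1 volt - negligible
```

The line length matters as much as the field strength, which is why only long lines over resistive rock are a concern.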

This then can be used by power companies to meet their requirements for resilience to once per century geoelectric hazards.

They found areas of both higher, and lower hazard than the 100-year values the companies are currently using.

Paper here: A 100‐year Geoelectric Hazard Analysis for the U.S. High‐Voltage Power Grid

Press release here

There are similar projects underway in many countries around the world to improve their understanding of the effects of these once in a century extreme global storms to build in resilience. For instance the UK is doing a four year three million dollar project called Space Weather Impact on Ground-based Systems (SWIGS) which started in 2017 so should end in 2021.

There are many other projects of a similar nature around the world in countries such as Russia, Japan, New Zealand, many European countries, Australia, Brazil, Canada, Ethiopia, the Nordic countries and several more. For details see:

Then, if you do get a solar storm, you can act quickly and protect your grid - so predictions help too. These vulnerability maps tell companies which power lines they need to pay particular attention to, right down to the level of detail of: "we have a solar storm coming and it looks like a big one, so we need to act to protect these particular lines".

As you see from the map most power lines are not at significant risk. Only the bright ones here need particular attention to make sure the attached transformers are resilient. They already have maps like this that they use but this one is more detailed and will help them do it more accurately.


Drag can lead to satellites de-orbiting sooner. There is more drag during solar storms but the satellites won’t fall out of orbit instantly - this is more a case of a shorter lifetime for satellites if they have lots of solar storms in their lifetime.

. Past Solar Superstorms Help NASA Scientists Understand Satellite Risks

Solar storms can shift the phase and frequency of GPS signals, requiring error correction. This could mean glitches for GPS during a storm.

. Errors Correction in GPS System Caused by Solar Activity

During solar storms satellites can get computer glitches from cosmic radiation - and there are solutions. Computers do still work in space.


During solar storms high energy particles can get through the walls of satellites and then they can deposit a charge inside of it. With several of these events static charges can build up and then discharge. Sometimes also charges build up on one side of a satellite and discharge to the other side. These can lead to:

  • Memory glitch - leading to a “soft reboot” after which the satellite is fine again
  • Permanent damage to microcircuits - this can sometimes be fatal to the satellite
  • Damage to solar cells - gradually degrading over several solar storms
  • Damage to the attitude control system so the satellite can’t stay properly oriented any more

The damage might not be noticed until after the solar storm. Based on 6,000 faults for the Soviet Kosmos satellites, the peak for malfunctions for low altitude satellites (below 1000 km) is 5 days after the storm, and for high altitude satellites, 2 days after the storm.

See Solar Storm Threat Analysis, 2007, a student paper from MIT surveying the topic, for more technical details.


We have many spacecraft in deep space and they, of course, are hit by numerous solar storms without the protection of Earth’s magnetic field.

We also get glitches even in low Earth orbit, and indeed at ground level, from cosmic radiation.

We have many spacecraft in space already - both within the Earth's magnetic field but outside the protection of its atmosphere, like Hubble, and way out in space, like the Perseverance and Curiosity rovers on Mars.


This is an example of a soft memory glitch. Just one memory location affected.

We sometimes get glitches on Earth too from cosmic rays, even through our atmosphere. This is a video clip of a Super Mario 64 glitch due to a cosmic ray event:

(click to watch on Youtube)

This is a replay by a programmer who programmatically flipped the bit at the right moment and duplicated the effect.

(click to watch on Youtube)

"During the race, an ionizing particle from outer space collided with DOTA_Teabag's N64, flipping the eighth bit of Mario's first height byte. Specifically, it flipped the byte from 11000101 to 11000100, from "C5" to "C4". This resulted in a height change from C5837800 to C4837800, which by complete chance, happened to be the exact amount needed to warp Mario up to the higher floor at that exact moment. This was tested by pannenkoek12 - the same person who put up the bounty - using a script that manually flipped that particular bit at the right time, confirming the suspicion of a bit flip."

. How An Ionizing Particle From Outer Space Helped A Mario Speedrunner Save Time
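You can reproduce the arithmetic of that bit flip in a few lines of Python. This just illustrates the numbers in the quote, interpreting the four height bytes as an IEEE 754 single-precision float:

```python
import struct

def flip_bit(word, bit):
    """Flip one bit of a 32-bit word (bit 0 = least significant)."""
    return word ^ (1 << bit)

before = 0xC5837800            # Mario's height bytes, per the quote
after = flip_bit(before, 24)   # lowest bit of the top byte: C5 -> C4
assert after == 0xC4837800

# Reinterpret both words as single-precision floats:
h_before = struct.unpack(">f", before.to_bytes(4, "big"))[0]
h_after = struct.unpack(">f", after.to_bytes(4, "big"))[0]
print(h_before, h_after)       # -4207.0 -1051.75
```

The flipped bit lands in the float's exponent, so a single cosmic-ray hit changed the magnitude of the height value by a factor of four.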


The simplest solution to issues like this is to add something called a "parity bit" to check whether any bits have been flipped. It's a simple idea - let's look at just four-bit numbers for simplicity.

In a binary number such as, say, 0011 (binary for 3), the parity bit is set to 1 if there is an odd number of 1 bits, and 0 if there is an even number. In this case it would be 0, so the number would be stored as [0]0011. If, say, the first bit was flipped, giving [0]1011 and turning 3 into 11, then the stored parity bit no longer matches the data, which would have been stored as [1]1011. So the computer detects an error and restarts the calculation, shows an error message, or recovers as gracefully as possible.
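Here is that scheme as a toy four-bit Python sketch (illustrative only - real hardware computes parity in the memory controller):

```python
def parity(value):
    """1 if the value has an odd number of 1 bits, else 0."""
    return bin(value).count("1") % 2

def store(value):
    return (parity(value), value)      # [parity]data, as in the text

def check(stored):
    p, value = stored
    return p == parity(value)          # recomputed parity must match

word = store(0b0011)                   # 3 is stored as [0]0011
assert check(word)

p, value = word
corrupted = (p, value ^ 0b1000)        # cosmic ray flips the first bit: 3 -> 11
assert not check(corrupted)            # mismatch -> error detected
```

Note that parity can only detect an odd number of flips, and it can't tell which bit changed - hence the need for error correcting codes below.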

Supercomputers have to do this: the Cray-1 recorded some 152 parity errors in its first six months - about one per day on average.

As supercomputers got faster and more complex with more memory and more processing power, they made more errors like this.


Parity bits aren't good enough if you've been calculating for a day and then your program crashes because of a parity error and you have to start again. Indeed, with modern supercomputers with large amounts of parallel processing, there are many parity errors every hour.

So nowadays they use something more sophisticated: error correcting code (ECC), which makes the data nearly an eighth longer but lets you actually correct single bit errors. ECC can also detect (and sometimes correct) a simultaneous flip of two bits. It can't detect or correct simultaneous flips of three bits, but that will be very rare.

Background to ECC here:

. Evaluation of Error-Correcting Codes for Radiation-Tolerant Memory
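To show how correction (not just detection) is possible, here is a toy Hamming(7,4) code in Python. Real ECC memory uses a wider variant of the same idea, but the mechanism is identical: several overlapping parity checks whose pattern of failures points directly at the flipped bit:

```python
def hamming74_encode(d1, d2, d3, d4):
    """Encode 4 data bits into a 7-bit codeword, positions 1..7:
    p1 p2 d1 p3 d2 d3 d4 (parity bits at the power-of-two positions)."""
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(code):
    """Recompute the parity checks; the pattern of failures (the 'syndrome')
    is the 1-based position of the flipped bit, or 0 if none."""
    syndrome = 0
    for k in range(3):
        parity = 0
        for pos in range(1, 8):
            if pos & (1 << k):          # check k covers positions with bit k set
                parity ^= code[pos - 1]
        syndrome |= parity << k
    fixed = code.copy()
    if syndrome:
        fixed[syndrome - 1] ^= 1        # correction is a single bit flip back
    return fixed, syndrome

word = hamming74_encode(1, 0, 1, 1)
hit = word.copy()
hit[4] ^= 1                             # cosmic ray flips position 5
fixed, pos = hamming74_correct(hit)
assert fixed == word and pos == 5       # error located and corrected
```

The syndrome is just the binary number spelled out by the failing checks, so correction costs a single XOR.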

Supercomputers also automatically save the calculations so that if you detect an error you can't recover from, you go back to the last saved state of your program.
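That save-and-restore strategy can be sketched in a few lines (the file name and checkpoint interval here are arbitrary choices for the example):

```python
import os
import pickle

CKPT = "state.ckpt"

def save_checkpoint(state):
    with open(CKPT, "wb") as f:
        pickle.dump(state, f)

def load_checkpoint():
    with open(CKPT, "rb") as f:
        return pickle.load(f)

# A long-running "simulation": sum the first million integers, checkpointing
# every 100,000 steps so a crash costs at most 100,000 steps of rework.
state = {"i": 0, "total": 0}
while state["i"] < 1_000_000:
    state["total"] += state["i"]
    state["i"] += 1
    if state["i"] % 100_000 == 0:
        save_checkpoint(state)

# After a detected, uncorrectable memory error we would instead do:
# state = load_checkpoint()   # and resume the loop from there
print(state["total"])
os.remove(CKPT)               # tidy up the example's checkpoint file
```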


It's not just the fast neutrons from cosmic rays impacting directly; it's also thermal neutrons - fast neutrons that have slowed down - which can then hit a boron nucleus and turn it into a lithium nucleus, giving off an alpha particle. This turns out to cause many more bit flips than the fast neutrons.

The average GPU could have a bit flip every 3.2 years (because it has so many bits in it).

If you translate this to self driving cars they calculate:

When it comes to cars, with roughly 268 million cars in the EU and about roughly 4% – or 10 million cars – on the road at any given time, there would be 380 errors per hour, which is a concern.

. Cosmic challenge: protecting supercomputers from an extraterrestrial threat – Physics World
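The quoted figure is easy to sanity-check: one flip per GPU every 3.2 years, across 10 million GPUs in use at once, works out to a few hundred flips per hour fleet-wide, in the same ballpark as the quoted 380 (the exact number depends on rounding):

```python
HOURS_PER_YEAR = 365.25 * 24        # about 8,766 hours

cars_on_road = 10_000_000           # roughly 4% of the EU's 268 million cars
flips_per_gpu_per_hour = 1 / (3.2 * HOURS_PER_YEAR)

fleet_flips_per_hour = cars_on_road * flips_per_gpu_per_hour
print(round(fleet_flips_per_hour))  # a few hundred per hour, near the quoted 380
```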

So if we do have widespread self driving cars in the future, we may need protection from this, much as for supercomputers.


All this is especially important for spacecraft because they don't have the protection of Earth's atmosphere and many don't have the protection of its magnetic field either.

Anything from CubeSats all the way to big exploration rovers - or the ISS, needs to constantly check for and correct memory errors.

Generally they have error correction codes, but they also have triply redundant memory: if a bit flips in any of the three locations the data was stored to, it's restored to the majority-vote value (this is in addition to the error correction codes, for when those don't fix it).

This is a summary of how it works for CubeSats

Error-Correcting Code Memory

Error-Correcting Code (ECC) memory is capable of detecting and correcting bit errors in RAM and flash memory. In general, ECC works by storing a checksum for a portion of the memory. This checksum can be used to simply mark a portion of memory unstable. Additional processing can use the memory and checksums to correct single and sometimes multi-bit errors. The memory controller is responsible for managing the ECC memory during read and write operations (28).

Software Error Detection and Correction

Bit errors can be detected and corrected using software. In general, Error Detection and Correction (EDAC) algorithms use three copies of the memory to detect and correct bit discrepancies. Software routinely “scrubs” the memory, compares each of the three stored memory values, selects the majority value, and corrects the erroneous memory location. Software EDAC can be performed at the bit or byte level. Memory lifetime needs to be considered for software EDAC implementations, since every correction increases the write count to a memory location.

. 8.0 Command and Data Handling
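The majority-vote "scrubbing" described in that quote can be sketched as a bitwise vote across three stored copies of each word (a toy version - real EDAC runs continuously in the background):

```python
def majority_vote(a, b, c):
    """Bitwise majority of three copies of a word: each bit takes the
    value held by at least two of the copies."""
    return (a & b) | (a & c) | (b & c)

def scrub(copies):
    """Vote, then rewrite any copy that disagrees (the 'scrub')."""
    voted = majority_vote(*copies)
    for i, v in enumerate(copies):
        if v != voted:
            copies[i] = voted          # correct the corrupted copy in place
    return voted

copies = [0b10110010] * 3
copies[1] ^= 0b00000100                # a bit flips in one of the three copies
assert scrub(copies) == 0b10110010     # the vote recovers the original word
assert copies == [0b10110010] * 3      # and the corrupted copy is rewritten
```

As the quote notes, every correction is an extra write, which is why memory lifetime has to be considered for flash.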

If the program detects a glitch that it can't fix with error correction codes, then normally the software reboots - just like rebooting your computer as the last thing to try when things go wrong.

So for instance in a solar storm GPS satellites may well detect unrecoverable errors and if so would just reboot.


If it is a serious problem or the storm continues it can get stuck in a bootloop - where it reboots but immediately encounters the same problem, and reboots again over and over. With Curiosity that was a big power drain and threatened to end the mission - so they switched to the backup computer and used it to analyse the problem.

Big satellites on very important missions, like the Mars rovers or the Hubble Space Telescope, have a complete backup computer system - two entire and separate systems - and if one fails they can switch control to the backup system and use it to analyse the problem in the primary system. The Space Shuttle went further, with multiple redundant computers.

This is an example where NASA did that with the Hubble Space Telescope, switching to the backup computer.

. NASA Returns Hubble Space Telescope to Science Operations

This is an example of when the Curiosity Rover had to do that on Mars.

The rover has a pair of identical brains running a 5-watt RAD750 CPU. This chip is part of the PowerPC 750 family, but it has been custom designed to survive high-radiation environments as you’d find on Mars or in deep space. These radiation-hardened CPUs cost $200,000 each, and NASA equipped the rover with two of them.

When Curiosity landed on Mars in 2012, it used the “Side-A” computer. However, just a year later in 2013 (Sol 200), the computer failed due to corrupted memory. The rover got stuck in a bootloop, which prevented it from processing commands and drained the batteries. NASA executed a swap to Side-B so engineers could perform remote diagnostics on Side-A. In the following months, NASA confirmed that part of Side-A’s memory was unusable and quarantined it. They kept Curiosity on Side-B, though.

. NASA Switches Curiosity Rover to Backup Computer Following Glitch - ExtremeTech

So - there are many things we are doing and can do to make satellites more resilient to solar storms.

This includes material from my Debunking: Solar Storms to end all life on Earth which I updated with this new material on the latest research on effects of solar storms on power grids.

See also

For the south Atlantic magnetic anomaly see

. Debunked: Earth’s magnetic field to reach zero this century - no - decreasing at 5% per century, would take centuries but doesn’t resemble field before past two reversals

For magnetic pole shift:

. Debunked: Earth’s magnetic poles to swap and make parts of Earth uninhabitable - NOT

For geographic pole shift (out of date theory)

. Robert Walker's post in Debunking Doomsday