Complex systems are inherently unstable – and can become unmanageable?

David Slater      August 13, 2019

Until after the Second World War, electricity was provided by local sources and Boards, rather like water.

Then came the organisation into a national body, the CEGB, and a system of central, very large generators (mostly coal fired) linked together in a National Grid. This enabled industry to operate in a much more secure climate, knowing that there was a central controller ensuring the lights stayed on and reacting to national emergencies such as coal miners' strikes and Beckham's penalties. The nation came to rely on, and take for granted, God's Wonderful Men, as these controllers were called in Wokingham. But as with most systems, and especially infrastructure, the grid has expanded, the generators have become more diverse and the interconnectivity more challenging. With the growing number of interconnectors, the impact of upsets can rebound across continents (the tree fall in the Italian Alps and the Bremen ship canal incidents are classic examples).

So this inherently unstable system, requiring dynamic control with manual interventions and overrides, has become just too complex for even the GWMs to get right every time, all the time (an unreasonable demand?).

This incident, and the inquiry now called for, is an excellent opportunity to re-evaluate where we are and where we should be putting our development money, not just to settle for convenient quick fixes! Do we want to make the system even more complicated, with extra trips and "automatic" cascades? Or should we stop and ask exactly what we need for the next 100 years?

The answer, I would venture to suggest, is to change our thinking. In much the same way as we have seen mainframes replaced or augmented, first by personal computers and now by cloud-based, distributed, hierarchical computing, so we could look at building up our local sources and mini and micro grids, with a central support system for topping up and balancing supply nationally. (As now, but as a last, not first, resort – bottom up, not top down.)
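The bottom-up idea can be sketched in a few lines. This is purely illustrative (the function names, grid figures and units are assumptions, not a real dispatch model): each mini grid balances its own generation against its own demand first, and only the residual shortfall or surplus is passed up for the central system to make good.

```python
# Illustrative sketch of "bottom up, not top down" balancing.
# All names and numbers are hypothetical, for exposition only.

def local_balance(generation_mw: float, demand_mw: float) -> float:
    """Residual after local balancing: positive = shortfall to import,
    negative = surplus to export to the wider network."""
    return demand_mw - generation_mw

def national_top_up(residuals_mw: list[float]) -> float:
    """The central system only makes up the *net* national residual,
    rather than dispatching every megawatt top-down."""
    return sum(residuals_mw)

# Three hypothetical mini grids: two roughly self-sufficient, one in deficit.
residuals = [local_balance(g, d) for g, d in [(120, 115), (80, 95), (60, 58)]]
print(national_top_up(residuals))  # net MW the central grid must supply: 8.0
```

The point of the structure, not the numbers: the central balancer sees only three small residuals, not the full demand of every grid, which is the "last, not first, resort" role argued for above.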

One of the problems that causes instability is the fact that, as we have seen in this latest episode, the grid frequency has to be the same all over the network. Interruptions or failures at transformer-linked sub-networks have a drastic knock-on effect nationwide. It is relatively simple to install demand-side response capabilities in our millions of appliances and industrial units – refrigeration, heating and air conditioning – which automatically and rapidly compensate for frequency swings. But it would also be an idea to explore how we could run a national network of islanded mini grids linked, like the cross-Channel link, by DC, isolating them from each other's frequency variations.

The FLEXIS project in South Wales, for example, is looking at how we can manage a local network containing a variety of generating sources and consumers better, more intelligently and more reliably, independently of central interventions. We even hope to have a much more predictable and reliable renewable generating source – the Swansea Bay Tidal Lagoon – as an integral part of this project.

So come on, guys: God's Wonderful Men have done a fantastic and much appreciated (sterling?) job. But as the picture of the control room (above) suggests, it is getting just too difficult to dynamically manage such an inherently complex and unstable system. Give them a break. Let's update our thinking and approach – change our model to bottom-up distributed generation that is inherently stable, so that the islanded mini grids can continue merrily and independently if necessary, rather than get knocked offline by increasingly frequent cascading load shedding across the network.

This is an opportunity not a disaster. When you’re in a hole – stop digging!