REDEFINING SUCCESS FOR A DISTRIBUTED ENERGY GRID: THE THREE TENETS
In our first “Three Tenets” blog we talked about the importance of speed in effectively leveraging distributed energy resources (DERs), and in the second we wrote about the importance of accuracy. In this one we add a third dimension of criticality – scalability. From our perspective, these are by far the top three critical success factors for DERMS and VPP projects today, and the determining factors for the long-term viability of those projects as ever-larger numbers of distributed energy assets find their way onto the grid. There are, of course, other important factors, but many that topped the criteria list during the early phases of DER adoption have been far overshadowed by the need for the combination of speed, accuracy and scalability.
Why? For all three, it boils down to the power of “more,” especially as it relates to Metcalfe’s Law and the “network effect”: a network of DER assets increases in value as the number of assets on it grows. With a big enough network and bidirectional communications, DERs can deliver continuous and precise capacity that supports the grid in ways that go far beyond demand response and traditional load shedding, including ancillary services, renewable energy firming, curtailment and spinning reserves, as well as voltage and frequency regulation – and that’s just the tip of the iceberg.
But in order to continue adding distributed energy assets – tens of thousands or even millions of them – the ability to scale is absolutely essential, as is the need for software that can handle this increase in scale.
Imagine a scenario where flexible loads from 100,000 grid-edge DERs respond in real time to correct a voltage irregularity on a feeder. Without scalability, this wouldn’t be possible. Just a few years ago, the near-limitless scalability demanded by a highly distributed energy world faced serious technological hurdles. Today, however, programming tools like Elixir can give us – and have given us – the wherewithal to overcome them.
The Elixir Fixer
It is common to see the technology world looking toward the latest innovations to solve the difficult problems of scalability. This can be hugely beneficial, as ideas and tools unfold that were not thought possible only a few years before. However, some problems deserve a thoughtful, evolutionary approach that has been battle-tested for many years.
In the early 1990s, engineers at Ericsson developed a platform architecture designed from the ground up to be massively concurrent, fully distributable across large clusters and fault tolerant in the face of transient errors. A system built with this technology is still the only running example of software to achieve nine nines (99.9999999%) of availability. This translates to a staggeringly low 31 milliseconds of downtime per year.
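The nine-nines figure checks out with back-of-the-envelope arithmetic – an unavailability of one part per billion, applied to the seconds in a year (Python used here purely for illustration):

```python
# Nine nines of availability leaves a fraction of 1e-9 of the year as downtime.
SECONDS_PER_YEAR = 365.25 * 24 * 3600          # ~31,557,600 s
unavailability = 1 - 0.999999999               # nine nines -> 1e-9
downtime_ms = SECONDS_PER_YEAR * unavailability * 1000
print(round(downtime_ms, 1))                   # ~31.6 ms per year
```

Depending on whether you count a 365-day or an average (365.25-day) year, that works out to roughly 31–32 milliseconds.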
That platform, known collectively as BEAM, OTP and Erlang, enjoyed only niche success, in domains that required this level of resiliency. That changed a few years ago when a new language was developed to take advantage of what the platform offers. Elixir opened the door to a flood of companies and developers by modernizing the toolchain developers use day to day and pairing speed of development with the best-in-class fault tolerance and concurrency the architecture has always delivered.
Enbala chose this tech stack for its platform because DERMS and VPP software must:
- Be available 24/7
- Support massive concurrency of millions of connected assets
- Self-heal in the face of runtime errors
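The third requirement – self-healing – is what OTP supervisors provide natively on the BEAM: a crashed process is simply restarted in a known-good state. The pattern itself is language-agnostic; here is a minimal sketch in Python (all names are illustrative – real Elixir code would use `Supervisor` and `GenServer` instead of threads):

```python
import threading

class AssetWorker(threading.Thread):
    """Simulates a per-asset process that may crash on a transient error."""
    def __init__(self, asset_id, fail_once):
        super().__init__(daemon=True)
        self.asset_id = asset_id
        self.fail_once = fail_once
        self.healthy = False

    def run(self):
        if self.fail_once.pop(self.asset_id, False):
            return  # simulate a crash: worker exits without becoming healthy
        self.healthy = True

def supervise(asset_id, fail_once, max_restarts=3):
    """Restart the worker until it comes up healthy (bounded retries)."""
    for attempt in range(max_restarts):
        worker = AssetWorker(asset_id, fail_once)
        worker.start()
        worker.join()
        if worker.healthy:
            return worker, attempt
    raise RuntimeError("worker kept crashing")

fail_once = {"asset-42": True}            # first start of asset-42 "crashes"
worker, restarts = supervise("asset-42", fail_once)
print(worker.healthy, restarts)           # True 1 -- healthy after one restart
```

The point is not the mechanics but the philosophy: rather than trying to prevent every transient error, the system assumes errors will happen and recovers automatically.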
The Telecom Litmus Test
Tested, tried and proven by our friends in the telecommunications industry, this tech stack has eliminated one of the biggest obstacles to effective management of distributed energy resources: being forced to choose between optimizing for speed of development OR optimizing for extreme scalability and fault tolerance. Before Elixir, development teams had to trade one off against the other – a difficult choice, since both are very important. It was possible to build software that was highly concurrent and allowed distributed computing at large scale – but (and this is a big “but”) it took a very long time to build correctly.
Elixir is the key to building highly concurrent systems... in the time frames that businesses today demand. In my view as a chief technology officer who has been driving software development for many years, Elixir is ridiculously productive, with an amazing ability to scale.
As proof of that point, when Enbala was evaluating this architecture, we built a version of our platform with Elixir and OTP that supported 100,000 simulated assets streaming 5-second telemetry – all while optimizing that fleet of assets every 5 seconds.
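A scaled-down sketch of that evaluation looks like this: N simulated assets each stream a telemetry reading, and an “optimizer” consumes the whole fleet’s readings per cycle. The numbers and names here are illustrative only, and Python coroutines stand in for the lightweight BEAM processes the actual test used:

```python
import asyncio

N_ASSETS = 10_000  # scaled down from the 100,000 assets in the real test

async def asset(asset_id, queue):
    # One telemetry tick per asset; a real system would loop every 5 s.
    await queue.put((asset_id, 1.0))  # (id, kW reading)

async def optimize_cycle(queue, n):
    # Consume one reading from every asset before closing the cycle.
    total_kw = 0.0
    for _ in range(n):
        _, kw = await queue.get()
        total_kw += kw
    return total_kw

async def main():
    queue = asyncio.Queue()
    tasks = [asyncio.create_task(asset(i, queue)) for i in range(N_ASSETS)]
    total = await optimize_cycle(queue, N_ASSETS)
    await asyncio.gather(*tasks)
    return total

total = asyncio.run(main())
print(int(total))  # 10000 -- every asset's reading reached the optimizer
```

On the BEAM, each asset would be an isolated, supervised process rather than a coroutine, which is what lets the same shape of system keep scaling into the hundreds of thousands of connections.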
I’ll conclude this blog with a caveat emptor. When you’re looking at DERMS or VPP software, everyone is going to tell you that their solution is scalable. But to meet the scaling needs of today’s world, you need to ask whether the solutions you are evaluating can:
- Support real-time, two-way communication with millions of devices
- Optimize the use of those devices against unique constraints, all while solving for minimum cost
- Support extreme up-time metrics and self-heal in the face of transient errors
Scalability is fundamental to the global energy transition to a more distributed, decentralized, carbon-neutral grid. Make sure the technology you deploy can meet your needs both today, when your DER network might be small, and tomorrow, when it may well need to control and optimize millions of assets to the furthest reaches of the grid edge.