Editor’s Note – this is the text of a speech provided by Kate Summers at All Energy Australia in Melbourne on Wednesday 26th October 2022
There are times when I think I ought to alter my title to include engineering counsellor as I am frequently called by colleagues and fellow power engineers for a heart-to-heart on what isn’t working in the current connection process, the modelling methodology and application requirements.
It's ten years since Julia Gillard gave her misogyny speech, which broke open the freedom for women to speak out. So indulge me: I am going to invoke my inner harpy and break open the silence on what is a growing national problem and a major roadblock to Australia achieving its orderly transition to renewable energy.
As a post-menopausal, feminist, LGBTQI+ "she" identified female engineer, trained and competent in power systems with close on three decades of practice in an industry dominated by energy market rules, and with an eye on the ethics of Engineers Australia, I have to call for a national conversation regarding the current modelling practices used for connection. These are compounding costs, causing significant delays to both generation and customers, impacting the ability to perform critical upgrades, and leading to unsafe outcomes.
The ethics of the profession require an engineer to practise in their area of competence. It is important, therefore, that majoring in power subjects, such as fields and electromagnetism, rotating machines, power electronics, protection systems, transformer design and control systems, is a prerequisite to preparing and approving engineering designs, studies and assessing integration work for the power system.
With that as a starting point, I am sure that this room is full of engineers and industry participants who want to contribute to the decarbonisation of the nation's electricity supply, transforming the nation into a renewable energy superpower (as Ross Garnaut has described it) and electrifying everything (as Saul Griffith promotes). We must not underestimate the size and complexity of the problem. But as we contemplate that complexity, gathering ever larger warehouses of data while developing volumes of rules on "system services", it would appear that we ignore the lessons that past engineering can teach us.
So before I see another project spend several years and millions of dollars to achieve a 5.3.4A letter for a set of GPS that may then have to be altered through a 5.3.9, with endless items in an issues tracker, I want to reflect on past practices to identify what we might collectively alter to improve what we are doing.
The Connections Reform Initiative undertaken by the CEC and AEMO identified a number of areas of the connection process that could be improved; however, we are yet to see it deliver meaningful change across all of those areas.
The volume of work required for a connection application is growing by the day. The fluctuations in what is required are a trap for all. Consider: you are approaching 95 per cent completion of a connection application when any of the following occurs:
- the frequency standard is reinterpreted – update studies and report to suit
- the PSS/E version is updated – rerun with recompiled code
- a model parameter is altered – repeat the studies
- PSCAD version 5 arrives (why is your model in 4.6?) – redo the DMAT after fixing the model
- a guideline is republished without warning – redo the package to suit
- new rules are published – don't you have the right to work under the ones you started with?
The list is endless, and it creates a Catch-22: a situation that is extremely difficult to get out of. And after all you have invested, you still require the GPS to reach financial close.
Furthermore, once you have gained your 5.3.4A (and B) letter, with a tenuously agreed GPS and so many caveats, it is not intended for financial decisions and remains subject to triggering a 5.3.9 process if anything changes prior to R1. And yet you are still to undertake your detailed design.
Here is where the process and the engineering practices deviate in a manner that was not intended when the rules were created.
We all know that connection studies are undertaken on the basis of early design. Sometimes preliminary designs are prepared for a tender process, but generally the engineer undertaking the studies starts with a relatively clean sheet. It requires engineering assumptions to develop what is expected to be connected: the OEM's model, the balance-of-plant assumptions, collector cabling, and connection-point transformer impedances and ratings. The modelling assumptions are therefore subject to the skill and experience of the engineer undertaking the modelling, and to the check calculations they perform to reassure themselves the model is reasonable. All of the basic design and assumptions have to be taken through the detailed design process – but this comes after financial close, and no one gets financial close without a connection offer.
The 5.3.9 process involves returning through 5.3.4A, and it expects any lesser negotiated standard to become automatic. This altered use of 5.3.9 has stopped projects from correcting errors in their modelling assumptions, leading to poor field outcomes, because of the cost and delay associated with an assessment of a "package". The due diligence process in any assessment is incentivised to find fault. If it is outsourced, and the consultant is paid on a time basis (there is no fixed pricing in DD work), then the more issues they find, the more time is clocked up. If it is in house, it may just be the culture or the practice to pursue any error or perceived issue.
A 5.3.9 prior to registration can send a contractor broke through liquidated damages (LDs) on failure to achieve milestones. The economic consequence of connection costs will lead to less competition in the Australian market for the delivery of projects, as fewer companies can tolerate the risk. This is happening at the very time when we require competitive delivery of an enormous number of infrastructure projects. Under professional ethics, a chartered engineer must consider the economic consequences of their decisions, so where requirements have inflicted unexpected costs without consideration, the action is questionable.
Let's wind the clock back a couple of decades, prior to cloud computing and terabytes of cheap memory. Contemplate that engineers designed and delivered the modern power system using elegant mathematical methods, slide rules and log tables. If they had a complex problem, they broke it down to fundamental elements and used hand-drawn charts from their calculations to visualise the characteristics they were studying. These methods exist today within power system programs, but it's like pulling teeth to get a power transfer limit described by a P-V curve. Such tools aid the broader understanding of operational limits but are rarely presented in reports or connection enquiries.
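To see how little machinery a P-V curve actually needs, here is a minimal sketch that traces the nose curve for a two-bus system from first principles. All values are illustrative per-unit assumptions (a lossless line feeding a unity-power-factor load), not any particular network or tool's method:

```python
import numpy as np

E = 1.0   # sending-end voltage (pu), illustrative
X = 0.5   # line reactance (pu), illustrative

# Receiving-end voltage vs. power transfer for a lossless two-bus
# system at unity power factor:
#   V^2 = E^2/2 +/- sqrt(E^4/4 - (P*X)^2)
P_max = E**2 / (2 * X)                # nose point: maximum power transfer
P = np.linspace(0.0, P_max, 200)      # sweep power up to the nose
disc = E**4 / 4 - (P * X) ** 2

V_upper = np.sqrt(E**2 / 2 + np.sqrt(disc))  # stable (high-voltage) branch
V_lower = np.sqrt(E**2 / 2 - np.sqrt(disc))  # unstable (low-voltage) branch

# At no load the stable branch sits at E; the two branches meet at the nose,
# which is the voltage-stability limit a P-V curve makes visible at a glance.
```

A few lines like these, plotted, communicate a transfer limit far more directly than a thousand time-domain traces.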
Past practice was informed and guided by the science of electromagnetism and field theory, the understanding of waveguides, the transmission of AC power, and the dynamics of rotating machines. In all, it took 200 years of scientific discovery to understand electromagnetism and a further 100 years of research and engineering endeavour to work out how to control the forces. The science has not changed, but in less than two decades we have entered a period in which the market overrides the science and lays legal claim to the control of the system.
Having been present when the original market code was drafted, I know the market was never intended to replace system dynamics and control. It was expected to remain an external economic outcome, not a primary controller. Operations now seem unsupported as control features are altered to suit market "operability" or the perceived need for centralised "controllability". This fails to appreciate that electricity travels at just below the speed of light: nothing an operator does in hindsight will correct the system if the primary controls are not in place, and any centralised, computer-processed control that must then be communicated to a site is too slow.
Prior to market start, when computing was limited, the system engineers tested the generators in the field: filtering the signals, selecting the sample rates, identifying where to measure, what to measure and what sort of step tests would be conducted. The engineers conducted the tests with field technicians making the connections. They collected the results, calculated the transfer functions from the frequency sweeps and created the dynamic model for use in system studies. This gave the control engineer intimate knowledge of the unit, ownership of the model and practical field experience. National Grid in the UK still undertakes this field testing and model work. It means the model is fit for purpose and designed for use in system studies by the system control engineers.
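As a hedged illustration of that workflow, the sketch below recovers a plant's gain and time constant from frequency-sweep data. The "measurements" are synthesised from an assumed first-order lag with made-up parameter values; in the field these points would come from injected sinusoids and measured responses:

```python
import numpy as np

# Assumed plant for the example: first-order lag G(s) = K / (1 + s*T).
# In practice these points would be measured during a frequency sweep.
K_true, T_true = 2.0, 0.8
w = np.logspace(-2, 2, 50)            # sweep frequencies (rad/s)
G = K_true / (1 + 1j * w * T_true)    # complex response at each frequency

# Recover the model parameters from the sweep data:
K_est = abs(G[0])                     # low-frequency (DC) gain
# Corner frequency: where the magnitude falls to K/sqrt(2), i.e. -3 dB.
idx = np.argmin(np.abs(np.abs(G) - K_est / np.sqrt(2)))
T_est = 1.0 / w[idx]                  # time constant from the corner
```

The point is not the three lines of arithmetic; it is that the engineer who walks through this with real field data owns the model and knows exactly where its assumptions live.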
The established principles for setting control parameters on units required analysing the settings for a wide range of operating conditions, tuning to find the best damping for local and system modes. A unit was not tuned to suit a fixed rise time and settling time; if this were done without consideration of other plant, it would undoubtedly cause excitation of another unit. My notes include "possible timebomb waiting to happen", implying possible damage to other units through poorly tuned settings.
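The kind of analysis this tuning relied on can be sketched with a linearised single-machine swing equation: the eigenvalues of the state matrix give the damping ratio and frequency of the local mode, which is what the engineer tuned for, rather than a rise time. All numbers here are illustrative per-unit assumptions, not any particular unit's data:

```python
import numpy as np

# Linearised swing equation about an operating point:
#   (2H/ws) * delta'' + (D/ws) * delta' + Ks * delta = 0
# H: inertia constant, D: damping coefficient, Ks: synchronising torque
# coefficient (all illustrative), ws: synchronous speed in rad/s (50 Hz).
H, D, Ks = 3.5, 2.0, 1.2
ws = 2 * np.pi * 50

# State-space form with states [delta, delta']:
A = np.array([[0.0, 1.0],
              [-Ks * ws / (2 * H), -D / (2 * H)]])

lam = np.linalg.eigvals(A)[0]          # one of the complex conjugate pair
zeta = -lam.real / abs(lam)            # damping ratio of the local mode
f_mode = abs(lam.imag) / (2 * np.pi)   # oscillation frequency (Hz)
# A local mode near 1 Hz with a few per cent damping is the sort of result
# the engineer would examine across many operating conditions before
# settling on control parameters.
```

The same eigenvalue view extends to multi-machine systems, which is precisely why tuning one unit in isolation to a fixed time-domain response misses the inter-unit modes.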
There was no fixed response as to how a unit recovered from a disturbance. It was understood that different technologies have different characteristics: steam units differ from hydro units, and so too do gas units. They weren't deemed unable to connect; the engineers tuned their controls to ensure that they operated in a stable manner on the system. The largest step injection was designed not to exceed a 5% change in the generator terminal voltage.
The current approach tunes inverter-based resources to a single response, fast and hard, regardless of what plant is electrically close. This is an experiment; it is likely impacting other plant, and we just don't know about it.
Furthermore, as the model test requirements become more extreme, applying large voltage changes, longer fault durations and large active power step changes, there is a trend to require the same tests on the power system itself. Step changes larger than any ever applied before are being required for R2; it is evident that the power system is now the test-bed simulator for the model. Such large tests have consequences for customers and for aged equipment: large voltage changes, excessive overvoltage or prolonged undervoltage all lead to equipment failures.
On this point, it is not uncommon to see a report whose results include a non-credible three-phase 500 kV fault (with CBF) well in excess of the region's critical clearing time or the published transient stability limit. Such results are presented as meaningful for a connecting generator. It is as if the rest of the system is irrelevant when GPS performance is assessed: not regional control, not synchronism with a neighbouring state, but "PSCAD said it worked".
This leads me to conclude that the more detailed the model, the less likely the results are to be reasonable. Either that, or the interpretation of the results ignores the critical elements of understanding the power system, since the whole of the system is not considered when assessing an individual connection.
The detail and complexity of the model are such that it is not easily adjusted and is easily broken, and as such it has limited use when there is a need to examine widely the interactions of control systems under different operating conditions.
There are better ways to study a complex system than programming everything.
I'm going to recommend reading Johann Hari's book "Stolen Focus", as it provides insight into the world of distraction that is getting in the way of the deeper thinking necessary to solve the transformation of the power system. As we debate market services, interact on LinkedIn, read determinations and write submissions on proposed rules, are we focussed and concentrating at the level required to actually understand the system and resolve the problem?
A few provocative thoughts:
Just because it's possible to code up automated studies producing thousands of results, is it necessary?
Have the results really been understood and interpreted, or were they just judged to fit the rule requirement?
The inclusion of protection within the dynamic model alters the model's purpose: is it a study of the system capability and response, or of the protection settings? The dynamic model for a synchronous machine is used to inform protection settings, not the other way around. There is an exception for power electronics, as protection of the IGBTs is required; otherwise it would be possible to simulate unrealistic results where in reality the device would fail.
Is the purpose of the detailed computer model really to prepare a forensic model to be used for legal compliance rather than for the primary purpose of system studies and as a tool to improve the engineering insight of the control of the power system?
In conclusion, it occurs to me that coding everything into software is destroying the ability of engineers to solve complex problems using first principles, known methods and a fundamental understanding of the science.
The quest for detail and the expectation that the results accurately represent everything fails to acknowledge that there are limitations to what can be reasonably represented in the mathematics. It is incumbent on the engineer to understand the science, the physical forces and limits of the equipment and the control systems used to manage the forces.
An electrical engineer must understand the electromechanical interaction on the shaft of a machine, and know that when the electrical properties change, the induced torque changes; equally, they know that modelling shaft fatigue is not necessary for system studies.
A network model that takes hours to solve will not be of help if decisions must be made quickly. It’s time for a reality check to rationalise the number of models, their use and purpose, to return to the model being used for simulation as a tool to inform, rather than it being a source of all truth.
We must also urgently reduce the cost of connection packages and even up the transparency of information regarding network planning.
Please, now that I have started it, let us continue this national conversation in the hope that common sense may prevail.
About our Guest Author
Kate Summers is a Fellow of the Institute of Engineers, and an experienced power systems and control engineer with extensive electrical experience, market and regulatory knowledge. She is passionate about renewable energy and dedicated to bringing about an orderly transition to a low carbon future. Her broad engineering knowledge has been gained over 28 years of engineering practice covering a wide range of practical field experience, power system analysis, transmission planning, operational control, regulatory compliance and project connection negotiation.
In 2020 Kate was jointly awarded National Professional Electrical Engineer of the Year for her work identifying the root cause of the deterioration in system frequency.
Kate’s recent focus has been on control philosophy, shedding light on the unintended consequences of market based decisions in respect of control theory, the loss of power system engineering practices and the escalating complexity of regulation imposed on engineering.
Current modelling practices are overly complex, devoid of clear purpose and extend beyond sound use of the mathematical methods. Computer models are a tool to aid engineering interpretation of the power system. Kate is an advocate for stepping back from detailed power system modelling to understand complex problems from fundamental principles aligned with power system control philosophy.
You can find Kate on LinkedIn here.