The history of system simulations
Today most industries depend on simulations somewhere in their asset lifecycle, whether it be for innovation, design, production, or operational support. However, this was not always the case, as simulation was regarded as a costly niche in engineering for many decades. This article explores the origins of simulation, giving insight into some important historic developments. Then, focus is shifted to system simulations and co-simulations, and their history in the maritime sector. Finally, a few predictions about the future are made.
Origins of computing
The origins of numerical computing long predate the invention of the digital computer. The ancient Greeks developed the Antikythera mechanism around 200 BC for the purpose of astronomical calculation. Various mechanical machines for numerical computation were invented between the 17th and 19th centuries, perhaps the most famous of which was Charles Babbage's Analytical Engine, which was designed to be "programmed" by means of punched cards. Analogue and electromechanical computing advanced rapidly, and in secrecy, during World War II, playing a crucial role in navigation, ballistics, and, most notably, the breaking of the Enigma codes by Alan Turing's code-breaking machines.1
Simulation, too, has its origins in the ancient world. As Plato's allegory of the cave alludes to, simulation is a form of thought experiment - a shadow of the real world projected on the cave wall. We won't be dealing with philosophy here, but let's dwell briefly on one early experiment - known as Buffon's needle.2 Carried out in the 1770s by the French naturalist and mathematician Georges Buffon, its goal was to approximate the value of \(\pi\) by repeating simple, randomized trials. Buffon drew equidistant parallel lines across a rectangular board. He then threw needles, each as long as the spacing between the lines, randomly onto it. Having already worked out through calculus and probability theory that the probability of such a needle crossing a line was \(2/\pi\), all he now had to do was repeat the experiment many times to estimate the "true" value of \(\pi\). This approach - repeating a random trial to approximate an uncertain quantity - is today known as a Monte Carlo method, though that term would not be coined until the 1940s. Such methods are especially useful for modelling problems with large input uncertainties - such as in risk analysis. Note that this approach is not a numerical simulation - it is an example of random sampling - given that the location of a dropped needle is not predetermined. To turn it into a simulation, we need a model that can describe the position and orientation of a needle.3
A simulation of Buffon's needle (source: Wikipedia).
Replacing the physical experiment with a numerical model is necessary if our goal is to study this problem in depth. We can then replace the act of throwing and counting needles with drawing uniformly distributed samples from the state space of our needle model, and explore different outcomes by varying the parameters of the model, such as how hard the board material is, or what height we drop the needles from. With a numerical simulation there is no need to count needles or acquire different types of boards, as long as the model is an accurate representation of the real world.
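To make this concrete, here is a minimal sketch of such a numerical simulation, written in Python with NumPy (neither appears in this article; they are assumptions for illustration). Each needle is reduced to two state variables - the distance from its centre to the nearest line and its angle relative to the lines - and \(\pi\) is recovered from the observed crossing frequency.

```python
import numpy as np

def estimate_pi(n_throws: int, needle_length: float = 1.0, line_spacing: float = 1.0) -> float:
    """Estimate pi by simulating Buffon's needle experiment numerically."""
    rng = np.random.default_rng()
    # State of each needle: distance from its centre to the nearest line,
    # uniform on [0, spacing/2], and its angle to the lines, uniform on [0, pi/2].
    distance = rng.uniform(0.0, line_spacing / 2.0, n_throws)
    angle = rng.uniform(0.0, np.pi / 2.0, n_throws)
    # A needle crosses a line when its projected half-length exceeds that distance.
    p_cross = np.mean(distance <= (needle_length / 2.0) * np.sin(angle))
    # Buffon's result: P(cross) = 2*l / (pi*t) for l <= t, so pi is roughly 2*l / (t*P).
    return 2.0 * needle_length / (line_spacing * p_cross)

print(estimate_pi(1_000_000))  # typically within about 0.01 of pi
```

Varying needle_length and line_spacing now plays the role of acquiring different boards and needles - it is the model, not the physical setup, that we experiment with.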
The first computer simulations
Before the advent of the analogue and early electronic computers of the 1940s and 50s, such numerical models still had to be solved by hand, which is not very efficient. Neither were the simulation efforts on the vacuum tube-based computers of this era. Those computers hardly resembled anything we are familiar with today - large, bulky, expensive, and very prone to developing strange problems. The popular origin story of the word bug in computer software comes from one such problem - an insect getting stuck inside the machinery and causing a malfunction. The layout of the hardware would also have to be changed to mimic the equations under study - and as a result, simulation models took too long to develop, and even when results could be produced, their applicability was ambiguous due to a lack of methods for verification and validation. One example involved an attempt to model peak loads for telephone systems. A team consisting of a mathematician, a systems engineer, and an assembly programmer attempted to use a discrete event simulation for this problem, which was analytically difficult because it did not conform to the queueing theory available at the time. In the end, they spent twice the budget, took twice as long, and accomplished less than half of what they intended.4 This tendency did not go unnoticed by operations management, of course, and simulation came to be treated as a method of last resort, with empirical or analytical methods preferred instead.
The spread of digital computers through the late 1950s and 60s also brought the first general-purpose programming languages, such as FORTRAN and ALGOL. Semantically these languages focused on numerical computing and describing algorithms, not simulation. The modern conveniences of the internet and open-source software were still a few decades away, and there were no common tools for simulation available yet. Re-creating the necessary functionality from scratch, such as random number generators and numerical solvers, quickly became a bottleneck in simulation projects. This led to the emergence of Simulation Programming Languages (SPLs), which specialized in simulation by offering these tools as an integrated part of the language.
There were several SPLs in circulation, but SIMULA deserves a special mention as it is widely considered to also be the first object-oriented programming language. First developed as a superset of ALGOL in 1962 by researchers Ole-Johan Dahl and Kristen Nygaard at the Norwegian Computing Centre, by the late 60s it had evolved into a general-purpose language, introducing concepts such as objects, classes, and inheritance that have remained central to software development to this day.5 Some other concepts that have had a major influence on computer science also originated with these SPLs. It was again SIMULA that introduced the concept of a process as a system entity communicating with other processes, which contributed to the design of process separation in the first operating systems for home computers. SIMSCRIPT's descriptive object relationships were co-opted by the computer science community into the first relational databases. While these developments were important, they did not quite bring simulation into the operational mainstream. It was still seen as a method of last resort, downplayed as experimental and expensive. This was aptly illustrated at the second conference on the applications of simulation - in New York in 1968 - where the first agenda item was simply stated as "Difficulties in convincing top management".6
Dahl and Nygaard.
Still, during the 60s numerical simulations were becoming more commonplace in some industries, particularly within the military, aviation, and space domains.7 During the Apollo space program, NASA pioneered the techniques that led to hardware-in-the-loop (HIL) simulations, which we will come back to later. Numerical computer simulations were applied in airframe designs, such as the once ubiquitous Boeing 747. Comparing the means of Boeing and NASA at this time with those of most other companies would hardly be fair, however - these were the exceptions.
It should be noted that DNV also has a long history in numerical computation and simulation-based services. In 1969, DNV purchased what was at the time the largest computer in Norway - using it to pioneer analytical, scientific approaches to risk management and shipbuilding practices. Simulation has been, and continues to be, used in many DNV services, such as novel digital and data-driven assurance efforts.8
Maturing the field
Despite these difficulties, the field of simulation was maturing by the mid-70s, through engineering school curricula, an increasing number of conferences, and a heightened research focus. Panel discussions were held to share common pitfalls and failures, and simulation courses for professionals were being organized.7 Simulation was also gaining traction in engineering, but two common fears still lingered:
- Simulation is very complicated, so only experts can use it.
- Simulation is too expensive, due to the time it takes to produce grounded, applicable results.
These fears were not without justification. Lacking visual tools for model development - models still had to be "hand-coded" - and a methodology for interpreting and verifying simulation results, simulation projects were often prone to bugs and problems that hampered their progress. Things were soon going to change, however. The rapid growth of computing power in the 90s, where the performance of consumer chips doubled every couple of years, as popularized by Moore's law, had an unlocking effect on all areas of numerical computation. By the mid-90s, computations that would take days on bulky office hardware from the 80s could be completed in a few minutes on a home computer. Professional software suites for modelling and simulation followed suit, such as MATLAB and its visual modelling companion Simulink, Wolfram Mathematica, Modelica, LabVIEW, and many others.
Emerging standardization
By the late 90s, the growth of home computer technology was being matched by an equal growth in industrial microelectronics, and in the automotive industry, computer chips were making their way into cars. Car engines were also improving through novel, simulation-backed designs. The European automotive industry was already highly integrated - a network of suppliers would provide parts for the engines, while the car manufacturer assumed responsibility for final assembly as the system integrator. A new problem was emerging - simulation models created with different tools were not compatible. To produce an integrated simulation of the entire engine, each supplier needed to supply a model of their component, all of which had to be interoperable at the signal exchange level. Protecting Intellectual Property (IP) rights was also a concern, with some suppliers reluctant to share their models, as that entailed sharing the source implementation and thus risked exposing design details.
This problem went beyond the details of different file formats and implementation styles. It exposed a need for modularization of models through standardization of the model interface, which was addressed in the European research project MODELISAR (2008-2011)9, originating from the automotive industry. MODELISAR introduced the Functional Mock-up Unit (FMU) and the Functional Mock-up Interface (FMI) - a packaged model and its interface, respectively10. With FMU providing a common way to package models, and FMI a common interface standard for connecting them, the approach quickly caught on outside the automotive industry as well. Today, FMI is the preferred standard for co-simulation - the most common form of simulation for connecting different models together into a structured system - and is supported by well over 200 different simulation tools.
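As an illustration of what this tool-independence means in practice, the snippet below uses the open-source FMPy library (an assumed dependency, not mentioned in this article) to inspect and run an FMU without any knowledge of the tool that exported it; the file name engine.fmu is a hypothetical placeholder.

```python
from fmpy import read_model_description, simulate_fmu

# 'engine.fmu' is a hypothetical FMU, exported from any FMI-compatible tool.
fmu_path = "engine.fmu"

# The model description (an XML file inside the FMU) lists the model's
# variables and capabilities - the standardized part of the interface.
description = read_model_description(fmu_path)
print([variable.name for variable in description.modelVariables])

# Simulate the FMU for 10 seconds of model time; the result is a structured
# array with a 'time' column plus one column per recorded output variable.
result = simulate_fmu(fmu_path, stop_time=10.0)
print(result["time"][-1])
```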
This kind of systems simulation is different from other types of simulation, such as Finite Element Method (FEM) or Computational Fluid Dynamics (CFD)-based variants, which do not have the same need to connect different models. Those typically employ advanced numerical solvers to simulate physical effects, such as fluid characteristics or bending loads, accurately over large geometric meshes. The solvers used in co-simulations are simpler, since the models are usually based on sets of ordinary differential equations. This makes co-simulation computationally efficient by comparison. One example in which performance is critical is connecting simulations with control system hardware in HIL testing, as the simulation then has to keep up with the real-time clock. Additional simulation modelling types are discussed in DNV-RP-051311.
Co-simulation is a distributed form of simulation, where a larger system is broken down into its component models. The modelling of each component can then be handled by the people - or the organization - most suited to the task, such as a particular equipment supplier. Each model can thus be developed independently, and act as a black box with respect to the other models. It is the responsibility of a coordinating co-simulation algorithm to synchronize the models in time, based on the data exchange model laid out by the FMI standard. This interoperability not only solves the system integration problems described above, but also enables a degree of collaboration across disciplines and organizations. An example diagram of a co-simulation using FMUs is shown below, with three models from different tools connected using FMI and the Open Simulation Platform (OSP) simulation engine.
An example co-simulation using FMUs.
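To make the coordinating role concrete, here is a minimal sketch of a fixed-step co-simulation algorithm in Python. The two model classes and their set_input/get_output/do_step methods are hypothetical stand-ins for black-box FMUs - they are not the actual FMI API, nor the OSP engine shown in the diagram.

```python
class PlantModel:
    """Hypothetical black-box plant model: first-order lag dx/dt = (u - x) / tau."""

    def __init__(self, tau: float = 2.0):
        self.tau, self.x, self.u = tau, 0.0, 0.0

    def set_input(self, value: float) -> None:   # analogous to setting an FMU input
        self.u = value

    def get_output(self) -> float:               # analogous to reading an FMU output
        return self.x

    def do_step(self, dt: float) -> None:        # the model integrates itself internally
        self.x += dt * (self.u - self.x) / self.tau


class ControllerModel:
    """Hypothetical black-box proportional controller: u = gain * (setpoint - y)."""

    def __init__(self, gain: float = 1.5, setpoint: float = 1.0):
        self.gain, self.setpoint, self.y, self.u = gain, setpoint, 0.0, 0.0

    def set_input(self, value: float) -> None:
        self.y = value

    def get_output(self) -> float:
        return self.u

    def do_step(self, dt: float) -> None:
        self.u = self.gain * (self.setpoint - self.y)


def co_simulate(plant: PlantModel, controller: ControllerModel,
                step: float = 0.1, end_time: float = 10.0) -> float:
    """Fixed-step co-simulation algorithm: exchange signals, then step each model."""
    t = 0.0
    while t < end_time:
        # 1. Exchange signals between the black boxes at the communication point.
        controller.set_input(plant.get_output())
        plant.set_input(controller.get_output())
        # 2. Let each model advance itself independently to the next point.
        controller.do_step(step)
        plant.do_step(step)
        t += step
    return plant.get_output()


print(co_simulate(PlantModel(), ControllerModel()))  # plant output after 10 s
```

Real co-simulation engines add much more - configurable step sizes, error handling, logging, and so on - but the essential loop of exchanging signals at communication points and letting each model solve its own equations in between is the same.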
The first versions of the FMI standard provided only a low-level interface for signal exchange, with no semantic meaning attached. While the encapsulation provided by FMU adds both structure and a layer of IP protection, and using FMI guarantees that data can be exchanged, neither makes any guarantee of correctness. Signals can easily be connected incorrectly, or a black-box model may output a variable in an unexpected unit or scale. Both can lead to strange simulation problems that are difficult to debug without prior knowledge of the root cause. The fast adoption of FMI and co-simulation brought these higher-level coupling issues, and more, to light, and while we won't dive into the details here, interested readers can consult 12 and 13. To ensure that the FMI standard stays relevant, the Swedish non-profit Modelica Association14 assumed responsibility for its management. An advisory board was established, composed of a large group of industrial users, in which DNV is also participating. These groups continue to coordinate development of the FMI standard, which is now up to version 3.0.
Co-simulations in the maritime industry
Unlike the automotive industry, where the car manufacturer acts as system integrator, the maritime industry has traditionally not had a clear integrator role. Nevertheless, the same problems the automotive industry had faced by the early 2000s had crept into the maritime domain by the 2010s. Ship systems were becoming increasingly software-driven, demanding time-consuming and complex integration between suppliers. Simulation-based techniques such as HIL testing have been employed in the automotive and maritime industries (and others) to address these issues. In HIL testing, the real control system hardware controls a numerical simulation of the physical equipment. This simulator-based approach facilitates early testing of both the software components and their interfaces, reducing costly onboard system commissioning time, and enables exploration of scenarios that would be too risky or expensive to perform with the real asset. However, it also requires access to said control system hardware, which has proved to be a significant constraint on scaling HIL. The prohibitive cost of duplicate hardware setups means there is usually only one available, provided by the equipment supplier, and time "in the lab" becomes precious for many activities beyond HIL testing. Services are offered both by third parties and directly by suppliers, and in 2014 DNV acquired the third-party HIL testing company Marine Cybernetics, which provided testing services for many maritime systems. The first HIL test of a Dynamic Positioning system was performed by Marine Cybernetics in 2004, introducing HIL testing to the maritime sector. Ten years later these services had been applied to many other systems, such as power distribution, emergency management, subsea, crane, and drilling systems, but all faced the same scaling issues.15
With these scaling problems, HIL testing alone is not sufficient to address the growing software complexity issues within the industry. A HIL test is typically performed once for each system delivery - close to the onset of commissioning and with stakeholders witnessing - not unlike the Factory Acceptance Test (FAT) regime which has been standard fare in equipment deliveries for decades. But while hardware rarely changes once it is built, software changes frequently, and chances are high that it has changed several times before it is deployed to its intended onboard system. A shift towards early integration and frequent testing of software is needed, and with this as a backdrop, the Open Simulation Platform initiative was formed as a partner agreement between DNV, Kongsberg Maritime, SINTEF, and NTNU in 2017. Accompanied by a Joint Industry Project with around 20 additional industry partners, the project set out with a grand vision - to establish a collaborative ecosystem for designing, operating, and assuring complex, integrated systems in the maritime industry. More projects in the same vein have followed since - such as the DNV-led and Norwegian Research Council-supported project Digital Twin Yard16, which concluded in 2021. The latter also contributed to the development of the Simulation Trust Center (STC), which provides a software-as-a-service approach to secure, cloud-based collaborative simulations, hosted by DNV.17 The OSP key properties of collaboration, re-use of models, IP protection, co-simulation, common standards, and open-source tools have formed a foundation for collaboration in several other industry projects. Like FMI, the Open Simulation Platform has its own steering committee and is actively maintained, and several developments from recent research projects have been made available through its open-source toolset.18
The key properties of the Open Simulation Platform initiative.
Conclusion
As we have seen, simulation has been moving away from two major bottlenecks - lack of computational power and lack of standardization - but perhaps we have recently hit another: a lack of common ground in our knowledge and tools. Today's simulation efforts can be very efficient, but they rely heavily on a combination of experienced simulation practitioners and domain knowledge, and engineers with both are in short supply. As a result, simulation development tends to be siloed and specific, rather than integrated and generic, hampering collaborative efforts even within the same organization. It is our view that this is slowly changing, however, so let's end this article by stating a few predictions about the future.
- Systems simulation will necessitate collaboration: The increasing complexity of systems highlights the importance of more collaborative development processes and shared understanding. There will be increased focus on building common ground, from requirements specification (for example, integrating systems modelling approaches such as Model-Based Systems Engineering with simulation), to integration and testing across organizations. Simulation will be increasingly used throughout the entire product lifecycle, streamlining the development process.
- Simulation will shift left: Simulation will be employed early and strategically, so that non-engineers (such as managers and salespeople) can have higher confidence in the viability of their projects and business models. The assurance role will shift left in turn, emphasizing the value of early problem detection backed by simulation-based testing methods and information delivered continuously to decision makers. Frameworks for continuous assurance will be developed to provide assurance services on demand, rather than by appointment.
- Simulation will shift right: Similarly, simulations will also become increasingly important for deployment and operations, with simulation-based digital twins providing forecasting for decision support systems. For autonomous systems, simulations will be essential in the operational phase to supplement physical data and provide accurate predictions about the health and behavior of the system.
- Simulation will leverage AI: AI-based methods will be used to aid simulation projects in many ways, and we can only fit a few examples here. Generative AI can potentially be used to generate reliable validation data that is otherwise hard (or just very expensive) to come by. An AI agent may trigger on a design change to automatically explore the new design space through simulations, without the need to manually configure them. Methods for learning models from data may replace computationally expensive models with efficient, lookup-based surrogate models. An AI agent may be used to configure appropriate plots and visualizations for the simulation results, based on its knowledge about the data and the domain.
With this in mind, we can safely say that today simulation has evolved from the method of last resort to the method of choice for problem solving in engineering. But we can also say that while the two common fears mentioned earlier have certainly diminished, they have not quite gone away. Simulation is still a bit too complicated, and results are still a bit too difficult to apply. But if we continue working on improving our shared methods and tools, these fears will, at the very least, continue to diminish.
More blog posts on Simulations and FMU models
- Creating FMU models using C++
- Creating FMU models using PythonFMU and component-model
- Creating FMU models from machine learning models
1. Ifrah, Georges (2001). "The Universal History of Computing: From the Abacus to the Quantum Computer".
2. Wikipedia. "Buffon's needle problem". https://en.wikipedia.org/wiki/Buffon%27s_needle_problem
3. Ventrella, Jeffrey. "Approximating pi with Buffon's Needle". https://www.ventrella.com/Buffon/
4. University of Houston. "Introduction to Modelling and Simulation Systems: A Historical Perspective". https://uh.edu/~lcr3600/simulation/historical.html
5. Nygaard, Kristen; Dahl, Ole-Johan. "The development of the SIMULA language". https://doi.org/10.1145/800025.1198392
6. Winter Simulation Conference, 1968. "Proceedings of the second conference on Applications of simulations". https://dl.acm.org/doi/proceedings/10.5555/800166
7. Nance, Richard E.; Sargent, Robert G. (2002). "Perspectives on the Evolution of Simulation". Operations Research 50(1):161-172. https://doi.org/10.1287/opre.50.1.161.17790
8. Paulsen, Gard; Andersen, Håkon W.; Collett, John Petter; Stensrud, Iver Tangen. "Building trust: the history of DNV". ISBN-10: 8280712569. ISBN-13: 978-8280712561.
9. The MODELISAR project. https://itea4.org/project/modelisar.html
10. The FMI standard. https://fmi-standard.org/
11. DNV-RP-0513. "Assurance of simulation models". https://www.dnv.com/digital-trust/recommended-practices/simulation-models-assurance-dnv-rp-0513/
12. Open Simulation Platform. "Co-simulation". https://opensimulationplatform.com/co-simulation/
13. Gomes, Cláudio; Thule, Casper; Broman, David; Larsen, Peter Gorm; Vangheluwe, Hans. "Co-simulation: State of the art". https://doi.org/10.48550/arXiv.1702.00686
14. The Modelica Association. https://modelica.org/
15. DNV GL acquires Marine Cybernetics. https://www.dnv.com/news/dnv-gl-acquires-marine-cybernetics-6171/
16. Forskningsrådet, Prosjektbanken. "Digital Twin Yard (DTYard) - An ecosystem for maritime models and digital twin simulation". https://prosjektbanken.forskningsradet.no/en/project/FORISS/295918
17. Simulation Trust Center. https://www.dnv.com/services/simulation-trust-center-collaboration-platform-207515/
18. Open Simulation Platform's open-source tools on GitHub. https://github.com/open-simulation-platform
Points of Contact
Senior Researcher, DNV
Group Leader and Senior Researcher, DNV