Reductionism vs Systems approach
This essay contrasts two frameworks for assuring the safety of complex engineered systems: reductionism and the systems approach. Reductionism attributes unsafe behaviour to individual component failures and overlooks interactions among a complex system’s interdependent constituents. The systems approach, grounded in the CESM metamodel, explicitly models the emergent behaviour that arises from interactions among those constituents and between the system and its environment, enabling more adequate safety assurance. That adequacy stems from multi-level abstraction: the system is organized into hierarchical tiers, each analysed across the dimensions of Composition, Environment, Structure, and Mechanisms (CESM). By analysing emergent behaviour across these dimensions and tiers – up to system-wide interactions – the approach can predict unsafe outcomes arising from interdependencies, non-linear dynamics, and environmental factors, not just from individual component failures. The essay argues that while reductionism suffices for simple, decomposable systems, the systems approach is imperative for complex systems, where its ability to model emergent behaviour and interdependencies provides rigorous grounds for well-justified confidence in the system’s behaviour – a capability beyond the reach of reductionism.
1 Reductionism
Imagine a haystack as a metaphor for the behaviour of a complex engineered system. Furthermore, consider a needle in this haystack as a metaphor for a scenario, a situation, or a triggering event that results in the complex system behaving unsafely.

Traditionally, the assumed nature of the needle – the triggering event – is a component failure [1] that causes unsafe system behaviour. Methods based on reductionism, such as Failure Modes and Effects Analysis (FMEA) and Failure Modes, Effects, and Criticality Analysis (FMECA), have therefore been used to analyse the effects of component failures.
The strategy, therefore, is to search through the haystack for needles.
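To make the reductionist unit of analysis concrete, here is a minimal sketch – in Python, with purely hypothetical field names not drawn from any particular standard – of the kind of record an FMEA-style worksheet captures: one component, one failure mode, its assumed effects, and a mitigation.

```python
from dataclasses import dataclass

@dataclass
class FmeaRow:
    """One illustrative row of an FMEA-style worksheet (hypothetical fields).

    The unit of analysis is a single component and a single failure mode --
    the only kind of 'needle' reductionist methods are built to find.
    """
    component: str      # the part being analysed
    failure_mode: str   # how that part can fail
    local_effect: str   # effect on the component itself
    system_effect: str  # assumed effect on system behaviour
    severity: int       # illustrative 1-10 rating
    mitigation: str     # e.g. redundancy or a higher-reliability part

# Example (purely illustrative): the analysis starts from a component failure,
# not from interactions between healthy components.
row = FmeaRow(
    component="coolant pump",
    failure_mode="bearing seizure",
    local_effect="loss of coolant flow",
    system_effect="overheating of the cooling loop",
    severity=9,
    mitigation="redundant pump",
)
```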

The challenge, of course, is to know where to look and to recognize a needle when you see one (i.e., understand that a particular failure mode of a particular component results in unsafe system behaviour).
Methods based on reductionism are not grounded in behavioural system models; they therefore provide very little intrinsic help to the practitioner. The effectiveness of an FMEA relies on the practitioner’s experience – that is, old knowledge of where needles are typically found: ‘so let’s look there’.
There are at least four problems with such methods:
- When the haystack gets larger and larger – increased system complexity;
- When the haystack changes form – novel technologies;
- Looking for just one kind of needle – the assumption that only component failures cause unsafe system behaviour;
- When a number of needles have been found – uncertainty about whether all have been identified.
When system complexity increases – that is, the size of the haystack increases – the number of needles also tends to increase, and there are many more places they can ‘hide’.

In highly complex systems, the challenge isn’t just finding needles – it’s knowing where to look. Without systematic guidance, practitioners rely on experience alone, turning the search into overwhelming guesswork. Some industries combat this guesswork with more guidelines: thicker manuals, longer checklists, and ever-expanding prescriptions, resulting in an unmanageable corpus of ‘best practices’. In our metaphor, this corpus is just a log of past finds – a list of hiding spots where needles were once found. But the haystack’s shape changes: new needles emerge where no one has looked before.
When the haystack is built on novel technology – such as AI – or its very structure shifts, practitioner experience becomes a liability. The needles aren’t just hidden; they’re in territories no one has mapped. This is where reductionism fails: it assumes the haystack is static, the needles finite, and the map complete.

Reductionist methods lack a fundamental model of system behaviour – so they offer no guidance for finding needles. Without it, practitioners grope in the dark, especially as needles hide in uncharted locations. Guidelines do update with experience, but society can’t tolerate a ‘learning period’ where accidents reveal the haystack’s blind spots one by one.
Reductionism assumes unsafe behaviour stems from a single type of needle: component failures. But systems can fail even when no part fails [2] – when interactions between constituents create hazards no one anticipated. These interactions – whether unplanned, unanticipated, or even planned but misunderstood – become new kinds of needles.

Because reductionist methods fixate on component failures, they overlook these interaction-based needles entirely. The critical distinction isn’t between causes the practitioner has seen before and those they haven’t – it’s between what reductionist methods can reveal (component failures) and what they’re blind to (emergent interactions). To expose a method’s blind spots – such as its inability to identify interaction-based needles – we must examine the system model it assumes. Because reductionist methods lack a system model, their reach is inherently limited. Left to their own devices, they default to two crutches: the practitioner’s past experience and those ever-expanding checklists that only document where needles used to be.
Under reductionism, scenarios for unsafe behaviour – and their causes – are largely derived from where practitioners have found needles before. This means confidence in having found all needles rests on a fragile assumption: that past experience can predict where needles will hide in future haystacks, even as those haystacks take on new and unfamiliar shapes.
A practitioner defending this approach typically points to two things: the sheer number of places they’ve checked and the perceived similarities to past projects. Yet this argument contains two critical flaws. First, its strength depends entirely on how much we trust the practitioner’s judgment – not on whether the method itself is capable of finding all possible needles. Second, claims of similarity often lack rigorous validation, especially with novel technologies, where past experience may not apply at all.
2 Systems approach
Let’s continue with our ‘needle in the haystack’ metaphor – this time through the lens of the systems approach. The goal remains the same: find all the needles, from the familiar to the emergent, in haystacks of different shapes and sizes. But the critical question shifts from ‘Where have we found needles before?’ to ‘How can we ensure we’re not missing the ones that matter most?’ The difference lies not in the what, but in the how – and the confidence we can place in the answer.
The systems approach incorporates concepts that, in principle, are both necessary and sufficient to explain and predict the behaviour of any engineered system. This system behaviour – the dynamic interplay between system components and between the system and its environment – determines whether the system remains safe.
In assurance, the focus must shift from component reliability to system behaviour. This behaviour must be the subject of the knowledge generated, and the rigour of its justification must scale with the risk.
A system’s behaviour emerges from two sources: interactions among its constituents and interactions between the system and its environment. The systems approach is designed to capture this emergent behaviour – not just as an observation, but as the foundation for predictive modelling. These models represent behaviour across multiple levels of abstraction, reflecting how unsafe behaviour can arise at any system tier, from component clusters to the system as a whole.
The behaviour of any system is modelled using four interconnected dimensions:
- Composition (C): the system’s parts or components;
- Environment (E): external items and conditions that interact with the system;
- Structure (S): the relationships among components and between the system and its environment;
- Mechanisms (M): the processes driving the system’s behaviour.
Together, these dimensions form the CESM metamodel – a framework not just for describing systems, but for predicting how they will behave.
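As a rough illustration only – the CESM metamodel is a conceptual framework, not a data schema – the four dimensions could be sketched in Python roughly as follows; all names and the example system are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CesmModel:
    """Illustrative sketch of a CESM-based system model (hypothetical names)."""
    composition: list[str]            # C: the system's parts or components
    environment: list[str]            # E: external items and conditions
    structure: list[tuple[str, str]]  # S: relationships among parts and with the environment
    mechanisms: list[str]             # M: processes driving the system's behaviour

# A toy model of a hypothetical autonomous ferry, purely for illustration.
ferry = CesmModel(
    composition=["navigation system", "propulsion", "collision-avoidance logic"],
    environment=["other vessels", "weather", "harbour infrastructure"],
    structure=[("navigation system", "propulsion"),
               ("collision-avoidance logic", "other vessels")],
    mechanisms=["route planning", "speed control", "evasive manoeuvring"],
)
```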
A model is a map of what it represents – just as a terrain map models a landscape in two dimensions. In our ‘needles in the haystack’ metaphor, the haystack is the complex engineered system, and the needles are the scenarios, triggers, or events that lead to unsafe behaviour. To find these needles, we need a map of the haystack: a model that reveals where to look, based on its Composition, Environment, Structure, and Mechanisms (CESM).
The CESM metamodel is a tool for crafting maps – each tailored to a specific haystack (i.e., a unique engineered system). Just as different terrains require different maps, different systems demand CESM-based models that capture their distinct Composition, Environment, Structure, and Mechanisms. The CESM metamodel doesn’t represent any single system; it’s a framework for generating system-specific models – hence, a metamodel.
Modelling a complex system isn’t straightforward: a single map can only capture so much. The causal factors for unsafe behaviour – our ‘needles’ – are too diverse for one model alone. Instead, we need a set of models, each revealing different types of needles hidden in the haystack.
Consider how a landscape can be represented by many different maps: one showing terrain, another municipal regulations, others depicting cables and pipes, biodiversity, walk paths, tourist attractions, or traffic patterns. Each map serves a distinct purpose, and depending on the decisions to be supported, several may be informative at once. Similarly, we may need different models – each tailored to highlight specific aspects of the haystack – so that together, they provide the comprehensive insight needed for robust analysis.
A complete set of system models must address all four dimensions of the CESM metamodel:
- Composition (C): modelled through the system’s objects (e.g., components and subsystems);
- Structure (S): modelled through agents and controllers (e.g., decision-making entities and their interactions);
- Mechanisms (M): modelled through functional processes (e.g., how inputs transform into outputs);
- Environment (E): modelled as containing other (complex) systems, and therefore itself needing to be addressed with the CESM metamodel [3].
The CESM metamodel doesn’t guarantee we’ll find every needle – that depends on the rigour of the assurance effort – but it does guarantee that assurance has the right kind of maps: a set of models incorporating all the properties needed to identify every kind of needle, from component failures to emergent interactions. While the metamodel provides the framework, the practitioner must develop specific models for each haystack – or each class of haystack – to ensure critical needles can be found.
Imagine flying over a field of multiple haystacks. From a distance, the haystacks may be difficult to separate from each other; they look like one gigantic haystack. By closing in on the field, we can differentiate between the haystacks, and going even closer, we can see the straws. This can act as a metaphor for the levels of abstraction. Depending on what we want to know about a system, we need to model it at different abstraction levels [4]. One level is no more correct than another; they are just different and depend on the kind of knowledge we are after [5]. Say we want to know whether there are haystacks on a field: we can fly over; we don’t need to count them. If we want to count them, we need a closer look. The knowledge in each of these cases is not better or worse than the other; it’s just different.
Thus, the CESM metamodel must be applied across all relevant abstraction levels – ensuring the system models generated are tailored to the specific epistemic needs at hand.
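A minimal sketch, again with purely hypothetical content, of what ‘the same system at different abstraction levels’ might look like – here an autonomous ship modelled at a voyage level and at a subsystem level, each serving a different epistemic need:

```python
# Purely illustrative: the same (hypothetical) autonomous ship modelled at two
# abstraction levels, each serving a different epistemic need.  Neither level
# is 'more correct'; they answer different questions.
ship_models = {
    "voyage level": {     # epistemic need: is the planned route safe?
        "composition": ["ship"],
        "environment": ["other vessels", "weather", "traffic separation schemes"],
        "structure": [("ship", "other vessels")],
        "mechanisms": ["route planning", "collision avoidance"],
    },
    "subsystem level": {  # epistemic need: can a sensor fault lead to an unsafe manoeuvre?
        "composition": ["radar", "lidar", "situation-awareness module", "autopilot"],
        "environment": ["sea clutter", "fog"],
        "structure": [("radar", "situation-awareness module"),
                      ("situation-awareness module", "autopilot")],
        "mechanisms": ["sensor fusion", "track prediction", "manoeuvre selection"],
    },
}
```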
3 Analysis and conclusion
Using the CESM metamodel, we can create dedicated maps for any haystack – regardless of its size or complexity. The metamodel’s flexibility allows us to tailor models to the abstraction level needed, ensuring they are fit for purpose.
But where CESM models are fit for purpose, reductionism fails as haystacks grow. Because it lacks any system model foundation, it cannot systematically relate to abstraction levels – let alone use them to handle complexity and find needles. Without this foundation, there’s no structure to build abstraction upon, only a scatter of unrelated straws.
Without a map, reductionism starts in one location and expands outward into an ever-larger volume – but only where it has searched before. When the haystack’s shape changes (e.g. with novel technology), these familiar paths vanish. With no structure to guide it, the search becomes blind, and needles in unfamiliar corners of the growing volume go undetected. Manoeuvring through this expanse without structure or abstraction only increases the risk of missing needles, especially as they hide in ever more intricate corners of the haystack.
The CESM metamodel, however, adapts to the haystack’s shape – regardless of the technology involved, whether AI or otherwise. Its technology-agnostic framework ensures the models remain fit for purpose, no matter how the system evolves. Critically, it captures all system aspects – necessary and sufficient – to model a complex system’s behaviour.
Unsafe behaviour arises from system behaviour – and while its triggers may include component failures, the path to hazard always depends on interactions within the system and between the system and its environment. A failure alone is rarely sufficient to topple the haystack. Instead, the needle’s effect must propagate through the haystack’s layers:
- In the Composition (C), where the arrangement of components creates inherent vulnerabilities (e.g., weak spots in the haystack).
- Across Environment (E), where it interacts with external systems (e.g., other haystacks or wind).
- Through Structure (S), where the needle weakens internal bindings (e.g., connections or dependencies).
- Via Mechanisms (M), where it exploits dynamic processes (e.g., feedback loops or cascades).
Only then does the haystack collapse – a system-level hazard born from cross-level interactions that CESM’s abstraction levels are designed to reveal.
Here, we must pause. Reductionism employs a two-part strategy:
- Searching for unsafe effects, typically focusing on component failures as the primary cause.
- Mitigating those effects through component redundancy or high-reliability components.
For simple systems, this approach can be adequate. However, as the haystack grows, both parts of the strategy fail [6] in several ways.
Firstly, as the haystack grows – increasing in complexity – the number of potential needles (component failures) explodes. Not all will cause unsafe behaviour, but identifying which ones matter becomes an unmanageable task, because reductionism lacks the capability to prioritize or trace their effects through the haystack’s layers.
Secondly, the placement of needles in the haystack matters as much as their existence. Whether a component failure leads to unsafe behaviour depends not just on the needle itself (failure mode), but on the haystack’s current state:
- Its internal arrangement (system state/operational mode);
- Its surroundings (environmental conditions); and
- How these interact dynamically.
This creates a combinatorial explosion: every needle must be paired with every possible state of the haystack and its environment. As dependencies among components tighten, predicting the system-level effects of a single failure – let alone all of them – exceeds reductionism’s capabilities.
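A deliberately small, hypothetical illustration of this multiplicative growth – the numbers are invented and serve only to show the scale of the scenario space:

```python
# Hypothetical numbers, chosen only to illustrate multiplicative growth.
components = 200          # parts in the system
failure_modes = 5         # failure modes per component
operational_modes = 12    # system states / operational modes
environmental_states = 8  # distinct environmental conditions

# Each potential 'needle' must be considered against every state of the
# haystack and its environment:
single_failure_scenarios = (components * failure_modes
                            * operational_modes * environmental_states)
print(single_failure_scenarios)  # 96000 -- before even considering multiple
                                 # failures or interactions between healthy
                                 # components
```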
Thirdly, unsafe behaviour isn’t always triggered by component failures. It can also emerge from interactions between normally functioning constituents – possibly in abnormal operational situations. The problem often stems from inadequate system requirements (missing, incorrect, or poorly refined), which are among the most common causal factors for unsafe behaviour. Even when all constituents meet their individual specifications, the integrated system may still behave unsafely. Such unsafe system behaviour may go undetected under reductionism, which lacks the capability to validate emergent behaviour.
A reductionist approach to accident investigation typically stops at some triggering event [7] – whether a component failure (e.g., due to poor maintenance) or an operator error (e.g., missed alarms). As a result, the lessons learned are narrowly focused: more redundancy, higher component reliability, additional operator training, or a stricter maintenance regime.
While these lessons learned – redundancy, reliability, training, maintenance – may all be valid, they avoid the critical question: Why did this triggering event lead to unsafe behaviour? Components fail. Operators make mistakes. Yet well-designed systems should contain such failures – even when such triggering events defeat the redundancy philosophy. The real gap lies in understanding how a local failure propagates through the system’s abstraction levels – Composition, Environment, Structure, and Mechanisms – to become a system-level hazard.
A reductionist strategy focuses solely on preventing triggering events – assuming unsafe behaviour can be avoided by eliminating all needles. But if a needle remains hidden – whether from component failure or human error – the pain is inevitable when the body lands. The haystack, as designed, lacks the layers to absorb or isolate the threat; the system lacks the structure to contain the failure.

While prevention remains part of a systems approach, its true strength lies in building resilience and robustness [8]. Instead of hunting down every possible needle in every situation, the system is designed so that triggering events – even when they occur – no longer lead to unsafe behaviour. It’s the difference between jumping into a haystack bare and wearing armour: the threat exists, but the system absorbs the impact.

This robustness emerges when safety is recognized as an emergent property – one that arises from the interplay among the elements captured by the CESM metamodel.
Footnotes
1. In this context, a component can also be a subsystem. Another term used in this context is system constituent.
2. Charles O. Miller, a founder of system safety, noted: ‘Distinguishing hazards from failures is implicit in understanding the difference between safety and reliability.’
3. A simple example: an autonomous ship needs to understand its environment, also known as external situation awareness. In this environment, there are other ships. These ships are systems in their own right. To understand and, to a certain degree, predict their behaviour, which is necessary for safety, we must understand these ships in terms of their system objects, controllers, and functions.
4. There are, in principle, two kinds of abstraction levels: ontological and epistemic. The system model at a specific abstraction level is ontological, while the kind of knowledge we are after is epistemic. Often, one kind follows the other; that is, for a specific kind of knowledge, we need a corresponding system model.
5. A word of caution – we often think of ontological abstraction levels in terms of level of detail. This is often useful and correct; however, it sometimes limits what an abstraction level fundamentally is. An operating manual of a system is a model of that system used by the operators. A maintenance manual of the same system may or may not be more detailed than the operating manual; it is just different. Moreover, a service engineer may also use the operating manual in his work: some system models, here represented by the manuals, may serve different epistemic needs. Another example is a system simulator: it is a model of a system that can be used for testing but also for training the operators – the same model serving different epistemic needs, where neither model nor need can be said to be more detailed than the other.
6. This is not to say that highly reliable components and component redundancy are unnecessary or irrelevant. On the contrary, reliable components and redundancy can be important for system safety; however, this strategy is not sufficient.
7. A triggering event is often referred to as the ‘root cause’. However, deciding what constitutes the ‘root cause’ tends to be affected by politics and legal considerations, or strongly shaped by the capacity of the method used to find it: ‘you see what you want to see’. If you look for a component failure, you stop the investigation when you find it. Sometimes the so-called root cause even appears to be almost randomly selected from several candidates.
8. The reductionist approach’s means of creating a robust system is component redundancy; however, in complex systems this strategy has been shown to be inadequate – sometimes necessary but never sufficient.