Bound by context

Toward a Representational Architecture of Reality 

A Meta-Theoretical Framework for Comparing and Translating Context-Sensitive Models Across Domains

Abstract

Scientific paradigms differ in ontology, formalism, and empirical domain, yet all must specify what exists, determine admissible configurations, encode structure in a formal language, and produce observable outcomes. This paper proposes a general representational architecture for modelling across domains, structured in three levels: ontological architecture, representational completeness, and selection and observation. This framework is proposed as a conjectural meta-theoretical structure, with empirical validation to be established through domain-specific applications.

At the ontological level, the Regional Arenas (RA) framework provides a functional decomposition of modelling structure into identity, coherence, representation, and expression. At the representational level, the Axiom of Representational Completeness and the Contextual Action Rule formalise the role of contextual structure (μ) in determining admissibility, inference, and observable outcomes. At the level of selection, the framework captures how context-dependent mechanisms govern observation and inference.

The framework is examined through established modelling systems, including Bayesian inference and active inference, and illustrated through examples from quantum mechanics, artificial intelligence, economics, and legal reasoning. A minimal translation procedure is introduced, together with a formal contextual operator as an illustrative instance to provide initial steps toward operationalisation.

The central claim is that unification can be approached at the representational level through structural alignment enabled by explicit modelling of both system state (ψ) and contextual structure (μ), without requiring ontological reduction.

“Unification” is used in a representational and non-reductive sense, referring to the possibility of comparing and aligning models within a shared structural framework. The paper is intended as a meta-theoretical contribution to the philosophy of science: its components are individually established, but their integration provides a novel structural unification and formalisation of context-sensitive modelling across domains.

Keywords: contextuality; representational completeness; philosophy of science; model alignment; cross-domain modelling; active inference; Bayesian inference

1. Introduction

Scientific knowledge is distributed across disciplines including physics, mathematics, computation, and philosophy. Each domain develops internally coherent models, yet no shared representational structure currently links them.

This fragmentation arises from differences in ontology, language, and formalism, and from the implicit treatment of contextual assumptions within individual frameworks. Related concerns appear in multiple traditions, including relational quantum mechanics, QBism, situation semantics (Barwise and Perry), and philosophy-of-science accounts of background assumptions (Kuhn, Hacking, Cartwright).

This paper proposes that theoretical unification is best approached as a problem of representational architecture. The present work should be understood as a formal conjecture supported by structural arguments and illustrative examples, rather than as a fully established theory. Nor does it replace existing theories; instead, the aim is to construct a framework capable of translating across paradigms while preserving structural validity. The present framework is both descriptive, in that many successful models already implicitly contain state–context structure, and clarificatory, in that cross-domain comparison becomes clearer when this structure is made explicit.

1.1 Relation to Existing Contextual and Relational Frameworks

The role of context in scientific modelling is well established across multiple traditions. In quantum foundations, relational quantum mechanics (Rovelli) and QBism emphasise that physical states are defined relative to observers or measurement contexts rather than as absolute properties. Similarly, the Kochen–Specker theorem formalises the impossibility of assigning context-independent values to quantum observables.

In logic and semantics, Barwise and Perry’s situation theory treats context-dependent meaning as a fundamental feature of representation, while in philosophy of science, Kuhn’s paradigms, Hacking’s styles of reasoning, and Cartwright’s nomological machines all highlight the role of background assumptions and domain-specific constraints in determining scientific validity.

In probabilistic and causal modelling, Pearl’s framework for causal inference shows that outcomes depend not only on variables but also on intervention structure, reinforcing the importance of contextual specification in determining admissible inferences.

The present framework is consistent with these traditions in recognising the indispensability of context. It differs, however, in scope and objective. Rather than advancing a domain-specific interpretation or theory, it proposes a general representational architecture in which models may be expressed as extended states Ψ = (ψ, μ), explicitly combining system description and contextual structure.

The Regional Arenas (RA) decomposition, inspired by prior work (Guevara Calderón), further distinguishes the framework by providing a minimal structural schema for modelling across domains, separating ontology, constraints, representation, and empirical projection. The aim is not to introduce contextuality as a new concept, but to formalise it as a general architectural requirement and to provide a basis for translation across otherwise incommensurable paradigms.

The novelty of the present framework lies in the combination of three elements: (i) a general extended-state representation Ψ = (ψ, μ), (ii) a formal criterion for representational completeness, and (iii) a structural procedure for aligning models across domains by making contextual structure explicit and comparable. The framework is therefore best understood as a structural unification and formalisation of existing ideas, rather than as a new domain theory.

Differences between perspectives can, on this view, be modelled explicitly as differences in contextual structure μ within extended states Ψ = (ψ, μ). This allows such differences to be analysed, compared, and, where possible, aligned through their effects on admissibility, inference, and observable outcomes.

1.2 What is Novel in This Framework

The present work does not introduce context, probabilistic inference, or observer dependence as new concepts. These are well established across multiple domains. Its contribution lies instead in identifying and formalising a structural requirement that has often remained implicit across modelling practices.

The central claim is that a model is not, in general, fully specified by its state description ψ alone, but only by an extended state Ψ = (ψ, μ), where μ denotes contextual structure. In this formulation, μ is not auxiliary information but a defining component of representation, since it determines admissibility, inference, and observable outcomes.

Under this view, omission of μ does not merely simplify a model, but may produce a structurally incomplete representation. Representational completeness is therefore elevated from a descriptive notion to a formal criterion: a model is incomplete if omission of contextual structure changes inference, prediction, or judgement.

This shifts the role of context within modelling. Rather than being treated as background detail or domain-specific supplement, context is reinterpreted as an admissibility structure governing which states are valid, which inferences are well defined, and which observations can occur. As a result, ψ alone is not, in general, a sufficient or self-contained representation.

A further consequence of this formulation is a reinterpretation of apparent dualities across domains. When models are expressed in reduced form, using ψ alone, contextual structure is suppressed and distinct behaviours may appear as incompatible alternatives. When models are expressed in extended form, as Ψ = (ψ, μ), these behaviours can instead be understood as context-dependent projections of a single underlying representational structure.

On this basis, the framework provides a general procedure for cross-domain comparison. Models are aligned not only through their state variables, but also through their contextual structures. Translation between paradigms is therefore defined as preservation of outcome structure under mappings of both ψ and μ, enabling comparison without, where possible, requiring ontological reduction.

Importantly, this structural claim admits empirical evaluation. If contextual structure is a necessary component of representation, then models that explicitly incorporate μ should outperform otherwise comparable models that omit it in domains where outcomes depend on context. This provides a direct link between representational completeness and measurable modelling performance.

Existing frameworks recognise the role of context, but typically treat it as auxiliary, implicit, or domain specific. The present formulation elevates contextual structure μ to a necessary component of representation and provides a formal criterion under which its omission renders a model incomplete. In this sense, the contribution is a meta-structural constraint on representation rather than a new domain theory.

The framework also admits operationalisation in applied settings. For example, in the prospective study design outlined in Appendix D, individuals may be represented by extended states Ψ = (ψ, μ), combining clinical variables with behavioural and environmental context. Such a design permits direct comparison between reduced models P(O | ψ) and context-sensitive models P(O | ψ, μ), thereby providing an empirical test of whether explicit contextual representation improves modelling performance.

The framework as described here is restricted to the public architecture of representation. It addresses how models specify states, contextual structure, admissibility, inference, and observable outcomes across domains. It does not attempt to formalise the full domain of privately inhabited or individually modulated meaning, except insofar as such meaning enters shared modelling through observable contextual structure. A complementary discussion of this boundary is developed separately here: https://www.dottheory.co.uk/paper/on-boundaries.

2. Axiom of Representational Completeness (structural criterion)

The present framework does not merely assert that context matters. It provides a structured account of how contextual specification μ enters into admissibility, inference, and observable outcomes, and identifies conditions under which omission of context leads to systematic predictive error.

Definition 1 (Reduced state) Let ψ denote a reduced state description.

Definition 2 (Contextual structure) Let μ denote contextual information influencing inference, prediction, or interpretation. Depending on the domain, μ may encode observational, epistemic, operational, semantic, or normative context.

Definition 3 (Extended state) An extended state is given by Ψ = (ψ, μ).

Axiom 1 (Representational completeness) A representation based on ψ alone is incomplete if there exist μ₁ and μ₂ such that

π(ψ, μ₁) = π(ψ, μ₂) = ψ

but

P(O | ψ, μ₁) ≠ P(O | ψ, μ₂)

where π denotes projection onto the reduced state.

A quantitative refinement of the axiom may be given in terms of divergence between outcome distributions. Let D denote a suitable divergence measure, such as Kullback–Leibler divergence or total variation distance. A representation based on ψ alone is representationally incomplete with respect to μ if there exists ε > 0 such that

D(P(O | ψ), P(O | ψ, μ)) > ε

for at least one admissible μ. The choice of divergence measure and threshold ε is domain dependent, but the condition formalises the requirement that omission of contextual structure leads to a measurable change in outcome behaviour. This formulation permits empirical evaluation of representational completeness through statistical comparison of model performance.
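As a minimal sketch of this divergence criterion, the following Python fragment marginalises out μ to form the reduced model P(O | ψ) and checks whether the Kullback–Leibler divergence from each contextual distribution exceeds a threshold ε. The outcome probabilities, the uniform context prior, and the threshold are illustrative assumptions, not empirical values.

```python
import math

# Hypothetical outcome distributions for a single reduced state psi,
# under two admissible contexts mu1 and mu2 (illustrative numbers).
p_given_mu = {
    "mu1": {0: 0.92, 1: 0.08},
    "mu2": {0: 0.78, 1: 0.22},
}
context_prior = {"mu1": 0.5, "mu2": 0.5}  # assumed context frequencies

# Reduced model: P(O | psi) obtained by marginalising out the context.
p_reduced = {
    o: sum(context_prior[m] * p_given_mu[m][o] for m in p_given_mu)
    for o in (0, 1)
}

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) over a shared finite support."""
    return sum(p[o] * math.log(p[o] / q[o]) for o in p if p[o] > 0)

EPSILON = 1e-3  # domain-dependent threshold, chosen here for illustration
incomplete = any(kl(p_given_mu[m], p_reduced) > EPSILON for m in p_given_mu)
print(incomplete)  # the reduced model diverges from both contextual models
```

The same test can be run with total variation distance or any other domain-appropriate divergence in place of kl.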

The axiom provides a criterion of completeness, while the Contextual Action Rule specifies when contextual structure must be operationally included in modelling practice.

Statements in what follows are classified as Principles, Structural Claims, or Propositions according to their role within the framework.

2.1 Concrete Example: Clinical Risk Under Contextual Variation

To illustrate the axiom, consider a clinical risk prediction setting in which patients share the same physiological state ψ but differ in treatment context μ. Let ψ denote physiological variables such as blood pressure, heart rate, and laboratory values; let μ denote treatment context, such as whether the patient is receiving a specific intervention; and let O denote an adverse outcome, such as a cardiac event within 24 hours. Empirical data may then show:

P(O | ψ, μ₁) = 0.08
P(O | ψ, μ₂) = 0.22

where μ₁ and μ₂ correspond to distinct treatment conditions.

Any model defined on ψ alone must assign a single value P(O | ψ), which cannot simultaneously match both conditional distributions. Consequently, such a model incurs irreducible calibration error: it will systematically overestimate risk in one context and underestimate it in another.

Formally, there exists no function f such that:

f(ψ) = P(O | ψ, μ)

for all admissible μ. It follows that ψ alone is not representationally complete.

This provides a concrete instance of Axiom 1: omission of contextual structure μ leads to a measurable divergence in outcome behaviour and a structurally inadequate representation.

This example also suggests that contextual structure μ is not merely additional information, but contributes to the admissible interpretation of ψ and therefore cannot, in general, be absorbed into a reduced state description without loss of structural clarity.
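The irreducible calibration error can be sketched numerically. Assuming, for illustration, that the two treatment contexts occur with equal frequency, the best single value c = P(O | ψ) under squared-error loss is the context-weighted mean risk, and its excess loss over the contextual model equals the variance of the context-conditional risk, which no choice of c can remove:

```python
# Sketch of the irreducible calibration error of Section 2.1 (assumed
# equal context frequencies; the risk values are those given in the text).
risks = {"mu1": 0.08, "mu2": 0.22}
freq = {"mu1": 0.5, "mu2": 0.5}

# The single value c = P(O | psi) that minimises expected squared error
# is the context-weighted mean risk.
c = sum(freq[m] * risks[m] for m in risks)

# Excess Brier-style loss of the reduced model over the contextual model
# equals the variance of the context-conditional risk.
excess = sum(freq[m] * (risks[m] - c) ** 2 for m in risks)
print(c, excess)  # c = 0.15; excess loss is strictly positive
```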

2.2 Minimal Construction: Conditional Representational Incompleteness

The structural claim of representational incompleteness can be illustrated by a minimal finite construction.

Let the reduced state space be:

ψ ∈ {a}

so that there is only a single reduced state.

Let the contextual structure be:

μ ∈ {μ₁, μ₂}

Let the outcome variable be binary:

O ∈ {0, 1}

Define the outcome distributions:

P(O = 1 ∣ ψ = a, μ₁) = 1
P(O = 1 ∣ ψ = a, μ₂) = 0

Equivalently,

P(O ∣ ψ = a, μ₁) ≠ P(O ∣ ψ = a, μ₂)

Now suppose there exists a reduced representation based on ψ alone, given by some function:

f(ψ) = P(O = 1 ∣ ψ)

Since ψ = a is fixed in both contexts, any such reduced representation must assign a single value:

f(a) = c

for some c ∈ [0, 1].

But no single value c can satisfy both:

c = 1 and c = 0

Therefore, there exists no function of ψ alone that reproduces the outcome structure across both admissible contexts.

Proposition 1 (Minimal incompleteness result)

There exist systems for which no reduced representation f(ψ) can preserve outcome behaviour across admissible contexts, whereas an extended representation g(ψ, μ) can do so exactly.

Proof

Define:

g(ψ, μ₁) = 1, g(ψ, μ₂) = 0

Then g reproduces the specified outcome behaviour exactly, while no f(ψ) can do so because ψ takes the same value in both cases. Hence the reduced representation is structurally incomplete. □

Interpretation

This construction is deliberately minimal. Its purpose is not to establish that all systems require contextual extension, but to show that whenever distinct admissible contexts generate distinct outcome distributions for the same reduced state, omission of μ produces irreducible representational loss.
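The construction and proof above can be checked mechanically. The following sketch confirms that the extended representation g reproduces the outcomes exactly, while every constant reduced representation f(a) = c fails on at least one context:

```python
# Minimal finite construction of Section 2.2: one reduced state, two
# contexts, deterministic and opposite outcomes.
outcomes = {("a", "mu1"): 1, ("a", "mu2"): 0}

# Extended representation g(psi, mu) reproduces the outcomes exactly.
def g(psi, mu):
    return outcomes[(psi, mu)]

assert all(g(psi, mu) == o for (psi, mu), o in outcomes.items())

# Any reduced representation f(psi) assigns a single value f("a") = c.
# Its worst-case error over the two contexts is max(|c - 1|, |c - 0|).
def max_error_of_reduced(c):
    return max(abs(c - o) for o in outcomes.values())

# Even the best constant incurs error 0.5; the contextual model has none.
best = min(max_error_of_reduced(c / 100) for c in range(101))
print(best)  # 0.5
```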

The notion of necessity employed in this framework is conditional rather than absolute. Contextual structure (μ) is not asserted as a metaphysical requirement for all representations, but as a functional requirement for representations intended to support reliable inference, prediction, or decision-making in environments where outcomes depend on context. In this sense, representational completeness is defined relative to modelling purpose rather than as an unconditional property of reality.

3. The Contextual Action Rule

Rule 1 (Contextual action) Context μ must be included whenever its omission changes observable outcomes or the set of admissible system configurations.

Formal criterion Let π : Ψ → ψ denote the projection from extended states Ψ = (ψ, μ) to reduced states ψ. Then context is necessary whenever there exist Ψ₁ and Ψ₂ such that

π(Ψ₁) = π(Ψ₂) = ψ

but

P(O | Ψ₁) ≠ P(O | Ψ₂)

Equivalently, context must be included whenever omission of μ alters either the outcome distribution or the admissible set of configurations associated with ψ.

Principle 1 (Context necessity) In reduced descriptions, ψ alone is insufficient whenever observable outcomes depend on contextual structure μ.

4. Three-Level Architecture of Reality Modelling

Level I Model-Architectural Layer

A paradigm consists of four interdependent structural functions:

RA4 Identity: primitive entities or state space
RA3 Coherence: admissibility rules and constraints
RA2 Representation: formal encoding
RA1 Expression: observable outputs

The Regional Arenas (RA) architecture provides a minimal functional decomposition of modelling structure into four interdependent roles.

Minimality clarification These four layers correspond to a functionally minimal decomposition capturing:

  • ontology (what exists)

  • constraints (what is admissible)

  • representation (how structure is encoded)

  • empirical projection (what is observed)

Alternative decompositions typically either collapse these functions or further subdivide them without introducing additional structural roles. This four-part decomposition is functional rather than metaphysical. It differs from layered ontologies such as Bunge’s levels of reality and from Marr’s levels of analysis in that it is not intended to describe ontological strata or explanatory grain, but the structural roles any modelling framework must fulfil. Coarser decompositions collapse distinct modelling functions, while finer decompositions typically subdivide one of these four without extending the underlying architecture. The RA decomposition therefore targets invariant structural roles across formalisms, facilitating alignment of otherwise incommensurable models.

Functional structure RA4 → RA3 → RA2 → RA1

Structural Claim 2 (Layer dependency) Each layer constrains and enables the next; omission or inconsistency propagates structurally.

Level II Representational Completeness

A model is representationally sufficient only if omission of μ does not materially change the outcome distribution, that is, P(O | ψ) ≈ P(O | Ψ), where ≈ denotes domain-appropriate equivalence of outcome distributions. Here, as throughout, μ is not assumed to be ontologically uniform across domains; it is a structural role variable.

Level III Selection and Observation

Selection mechanisms map possible states into realised observations or inferences.

Figure 1: Representational architecture of reality modelling.

The framework is organised into three levels: ontological architecture (Level I), representational completeness (Level II), and selection and observation (Level III). Level I specifies the structural roles of identity (RA4), coherence (RA3), representation (RA2), and expression (RA1). Level II introduces the extended state Ψ = (ψ, μ), in which system state and contextual structure jointly determine admissibility and inference. Level III captures selection mechanisms mapping possible states to observable outcomes. The figure illustrates how these levels interact to produce context-sensitive modelling across domains.

5. Selection and Active Inference

Active inference (Friston et al., 2017) provides a formal model of context-dependent selection through variational free energy minimisation:

F = E_q[ln q(s) − ln p(s, o)]

Here F denotes variational free energy, understood as an evidence bound governing approximate inference and policy selection.

This drives:

  • belief updating

  • action selection

Note: Active inference is introduced here as an established formal instance of Level III selection, not as the unique or necessary realisation of that level.
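For a discrete toy model, the free-energy bound can be computed directly. In the sketch below the joint distribution is an assumption chosen for illustration; it shows that F upper-bounds −ln p(o) and is tight exactly when q(s) matches the posterior p(s | o):

```python
import math

# Discrete sketch of variational free energy F = E_q[ln q(s) - ln p(s, o)]
# for a fixed observation o (toy joint distribution, assumed values).
p_joint = {("s1", "o"): 0.3, ("s2", "o"): 0.1}  # p(s, o) for the observed o
p_o = sum(p_joint.values())                      # model evidence p(o) = 0.4
posterior = {s: p_joint[(s, "o")] / p_o for s in ("s1", "s2")}

def free_energy(q):
    """F = E_q[ln q(s) - ln p(s, o)] over a finite state space."""
    return sum(q[s] * (math.log(q[s]) - math.log(p_joint[(s, "o")]))
               for s in q if q[s] > 0)

# F is an upper bound on -ln p(o), tight when q equals the posterior.
f_post = free_energy(posterior)
f_other = free_energy({"s1": 0.5, "s2": 0.5})
print(f_post, -math.log(p_o), f_other)  # f_post equals the bound; f_other exceeds it
```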

Proposition 3 (Selection formalisation)

Active inference provides a formal realisation of context-dependent selection over extended states Ψ = (ψ, μ).

This connects the conceptual role of selection in Level III to an established computational framework.

6. Bayesian Models and a Minimal Formal Example

Bayesian systems model:

ψ: system state
μ: observational history

Proposition 4 (Probabilistic context dependence) Context dependence is structurally required for probabilistic inference.

Formal example Let ψ denote a state variable x, and let μ encode prior context.

P(x = 1 | μ₁) = 0.9
P(x = 1 | μ₂) = 0.1

Then:

Ψ₁ = (x, μ₁), Ψ₂ = (x, μ₂)

P(O = 1 | Ψ₁) ≠ P(O = 1 | Ψ₂)

Interpretation

Inference depends on the extended state Ψ = (ψ, μ), that is, on the system description together with its contextual specification, not on ψ alone.
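A short sketch of this example, with an assumed likelihood P(O = 1 | x) added so that the two prior contexts yield distinct predictive outcome distributions for the same state variable:

```python
# Predictive outcome probabilities under the two prior contexts of
# Section 6. The likelihood values are illustrative assumptions.
prior = {"mu1": {1: 0.9, 0: 0.1}, "mu2": {1: 0.1, 0: 0.9}}
likelihood = {1: 0.8, 0: 0.2}  # assumed P(O = 1 | x)

def p_outcome(mu):
    """P(O = 1 | Psi) = sum_x P(O = 1 | x) P(x | mu)."""
    return sum(likelihood[x] * prior[mu][x] for x in (0, 1))

print(p_outcome("mu1"), p_outcome("mu2"))  # approx. 0.74 and 0.26
```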

6.1 Cross-paradigm Translation Example

A minimal but non-trivial cross-paradigm translation can be demonstrated between quantum measurement and Bayesian inference by aligning their extended-state structures.

In quantum mechanics, a system is described by a state ψ together with a measurement context μ, typically represented by a choice of observable or basis. Measurement outcomes are then given by

P(o | ψ, μ)

where μ determines the admissible projections of ψ.

In Bayesian inference, a system is described by a latent state ψ together with contextual structure μ, such as prior distributions and likelihood models. Observations are likewise determined by

P(o | ψ, μ)

In both cases, outcome probabilities are conditioned on an extended state Ψ = (ψ, μ), in which μ constrains the admissible mappings from state to observation.

Under this correspondence, the measurement context in quantum mechanics maps to prior and model structure in Bayesian inference, while projection of the quantum state onto a measurement basis maps to likelihood evaluation. That is:

measurement context ↔ prior and model structure
projection ↔ likelihood

This alignment preserves the role of context in determining admissibility and outcome structure. In both cases, outcomes are not functions of ψ alone, but of the extended state Ψ = (ψ, μ). This illustrates that contextual dependence is not specific to a given domain, but reflects a shared structural requirement, supporting the claim that cross-paradigm translation can be approached through alignment of extended-state representations.

The framework defines conditions for structural alignment rather than a full computational translation procedure. The example is intended to illustrate this form of alignment.

The preceding alignment may be stated more formally as follows. Consider a finite state space in which a quantum system is represented by a state vector ψ and a measurement context μ corresponding to a choice of orthonormal basis. The probability of outcome o is given by projection:

P(o | ψ, μ) = |⟨o_μ | ψ⟩|²

In a Bayesian system over a corresponding discrete state space, let ψ denote a latent variable and let μ encode prior and likelihood structure. Then:

P(o | μ) = ∑_ψ P(o | ψ, μ_likelihood) P(ψ | μ_prior)

Thus, the Bayesian outcome distribution depends jointly on the latent state structure and the contextual specification encoded in μ.

Under a mapping in which measurement basis corresponds to likelihood structure and projection corresponds to evaluation under that likelihood, both systems define outcome distributions conditioned on an extended state Ψ = (ψ, μ). The alignment preserves the dependence of admissible outcomes and their probabilities on contextual structure, demonstrating that the difference between the two frameworks lies not in the presence of context, but in its formal representation.
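The Born-rule side of this correspondence can be illustrated for a single qubit; the state and measurement bases below are assumptions chosen for illustration. The same ψ yields different outcome distributions under different measurement contexts μ:

```python
import math

# Single-qubit sketch of P(o | psi, mu) = |<o_mu | psi>|^2 (assumed state).
psi = (1 / math.sqrt(2), 1 / math.sqrt(2))  # state vector in the Z basis

# Two measurement contexts mu: the Z basis and the X basis.
bases = {
    "Z": [(1.0, 0.0), (0.0, 1.0)],
    "X": [(1 / math.sqrt(2), 1 / math.sqrt(2)),
          (1 / math.sqrt(2), -1 / math.sqrt(2))],
}

def born(state, basis):
    """Outcome distribution by projection onto each basis vector o_mu."""
    return [abs(b[0] * state[0] + b[1] * state[1]) ** 2 for b in basis]

# Same psi, different mu, different outcome distributions.
print(born(psi, bases["Z"]))  # approx. [0.5, 0.5]
print(born(psi, bases["X"]))  # approx. [1.0, 0.0]
```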

6.2 Concrete Example: Language Models and Context-Dependent Interpretation

A structurally distinct instance of representational incompleteness arises in contemporary language models, where identical inputs may yield different outputs depending on contextual specification.

Let ψ denote a fixed input string:

ψ = “The bank is closed.”

Let μ encode contextual interpretation:

  • μ₁: financial context

  • μ₂: river context

Empirically, the conditional output distributions differ:

P(O | ψ, μ₁) → “financial institution not open”
P(O | ψ, μ₂) → “riverbank inaccessible or blocked”

Thus:

P(O | ψ, μ₁) ≠ P(O | ψ, μ₂)

A model defined on ψ alone must assign a single distribution P(O | ψ), which necessarily conflates these interpretations. This results in ambiguity or unstable outputs, often observed as inconsistent or incorrect completions when contextual cues are underspecified.

Formally, there exists no function f such that:

f(ψ) = P(O | ψ, μ)

for all admissible μ. The input string ψ is therefore not representationally complete.

Importantly, μ in this setting does not merely supply additional variables but determines the admissible semantic interpretation of ψ. The same token sequence corresponds to distinct meaning structures under different contexts, and these cannot, in general, be reduced to a single context-independent representation without loss of inferential coherence.

This provides a second, non-clinical instance of Axiom 1: omission of contextual structure produces divergent outcome distributions and structurally incomplete representations.
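A minimal numerical sketch of this conflation, with assumed sense distributions: a reduced model, forced to commit to a single distribution such as the uniform mixture of the two contextual ones, mis-assigns probability in both contexts:

```python
# Illustrative sketch of Section 6.2: a fixed string psi, two contexts,
# and context-conditional interpretation distributions (assumed values).
psi = "The bank is closed."
p_sense = {
    "financial": {"institution": 0.95, "riverbank": 0.05},
    "river":     {"institution": 0.05, "riverbank": 0.95},
}

# A reduced model must commit to one distribution, e.g. the uniform
# mixture, and so diverges from both contextual distributions.
p_reduced = {
    s: sum(p_sense[m][s] for m in p_sense) / len(p_sense)
    for s in ("institution", "riverbank")
}
print(p_reduced)  # uniform over the two senses, wrong in each context
```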

7. Domain illustration: Quantum Measurement

ψ: quantum state

μ: measurement context

Measurement outcomes depend on μ, as demonstrated by the Kochen–Specker theorem. In the present framework, this means that the observable outcome cannot be specified by ψ alone, but only by the extended state Ψ = (ψ, μ).

Proposition 5 (Physical contextuality)

Quantum systems require contextual specification for complete description.

Positioning note:

This observation aligns with contextual interpretations such as relational quantum mechanics and QBism, but the present framework differs in aiming to provide a general representational architecture applicable across domains rather than an interpretation of quantum theory alone.

8. Domain illustration: Artificial Intelligence Systems

ψ: model parameters

μ: prompt/context

Structural Claim 6 (Computational contextuality)

AI systems implement context-dependent mappings consistent with Ψ = (ψ, μ). 

Because contextual structure (μ) conditions inference and observable outcomes, the handling of context (whether through alignment, restriction, or modification) can influence how systems are experienced, although the framework itself is descriptive rather than prescriptive with respect to such use.

In contemporary AI systems, identical model parameters may generate different outputs under different prompts, system instructions, retrieval contexts, or conversational histories. In this sense, the operative state of the system is not exhausted by model weights alone, but depends on the extended state Ψ = (ψ, μ), where μ conditions both interpretation and output selection. This provides a computational instance of contextual dependence in which omission or mis-specification of μ can lead to systematic variation in behaviour, including ambiguity, instability, or error.

9. Formal Context Operator

To formalise μ, we introduce a minimal contextual operator consistent with observer-conditioned representations. In the present notation, μ is realised through the admissibility structure induced by the operator, so that the operator provides a formal representation of contextual constraint rather than a separate ontological component.

Related work in Dot Theory and CoST (Conditional Set Theory), currently available as preprints and online manuscripts, develops extended constructions but is not required for the present framework. The contextual operator is introduced here as a minimal formal device illustrating how contextual constraints may be represented formally. The framework does not depend on a specific operator, and existing probabilistic or logical tools may instantiate its role effectively within a chosen domain of calculation.

Definition 4 (Contextual ordering) Let X be a base set. Let ≺ψ denote an observer-conditioned relation:

x ≺ψ y

Definition 5 (Context filter)

βψ : X → X

with

βψ(X) ⊆ X

The operator is indexed by the representational standpoint associated with ψ, while μ is realised through the admissibility structure it induces, so that the same formal role may be instantiated differently across domains according to the informational and inferential use of the representation.

Definition 6 (Admissible set) Aμ = βψ(X)

Relation

x ≺ψ y ⇔ (x ∈ Aμ) ∧ Rψ(x, y)

Selection

P(x | Ψ) ∝ 𝟙{x ∈ Aμ} · wψ(x)

The weighting term wψ(x) may incorporate context-sensitive quantities such as precision weighting or expected free energy, linking the operator to selection mechanisms in active inference.

Proposition 7 (Contextual modulation) Context μ modifies admissibility and selection over ψ.

Reduction If ψ is fixed, βψ reduces to the identity and ≺ψ reduces to a standard partial order.

Corollary 1 (Trivial-context reduction) If μ is trivial, in the sense that βψ(X) = X and wψ(x) = w(x) is independent of ψ, then the contextual selection rule

P(x | Ψ) ∝ 𝟙{x ∈ Aμ} · wψ(x)

reduces to the standard unconstrained weighting

P(x | ψ) ∝ w(x)

In this regime, the framework collapses to an ordinary non-contextual update rule.

Interpretation: this shows that the contextual operator is conservative. When context does no work, the extended formulation reduces to the standard case.
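The operator and selection rule admit a direct instantiation. In the sketch below the base set, weights, and the filter's behaviour are illustrative assumptions; the "trivial" branch exhibits the reduction of Corollary 1:

```python
# Minimal instantiation of the contextual operator of Section 9
# (illustrative sets and weights).
X = {"x1", "x2", "x3"}

def beta(psi, base):
    """Context filter beta_psi: X -> X yielding the admissible set A_mu."""
    if psi == "restricted":
        return {x for x in base if x != "x3"}  # context excludes x3
    return set(base)                           # trivial context: A_mu = X

def select(psi, base, w):
    """Selection rule: P(x | Psi) proportional to 1{x in A_mu} * w_psi(x)."""
    admissible = beta(psi, base)
    z = sum(w[x] for x in admissible)
    return {x: (w[x] / z if x in admissible else 0.0) for x in base}

w = {"x1": 1.0, "x2": 1.0, "x3": 2.0}
print(select("restricted", X, w))  # mass on x1 and x2 only
print(select("trivial", X, w))     # Corollary 1: plain weighting w(x)
```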

The contextual operator βψ may be interpreted as inducing a σ-algebra, filtration, or admissibility structure over X, depending on the domain of application. In probabilistic settings, βψ corresponds to restriction of the sample space under conditional information. In causal frameworks, it may be associated with intervention structure or conditioning sets. In constraint-based systems, it defines feasible regions under admissibility rules. In this sense, the operator does not introduce a new mathematical object, but provides a unifying representation of context as a structure that constrains both the space of admissible states and the weighting over those states. This interpretation situates the present formulation within existing formal frameworks while preserving its generality.

The contextual operator βψ, as defined here, captures the role of context in determining admissibility—that is, which states or configurations are available for consideration under a given representational standpoint. In many modelling frameworks, however, context also plays a second role: it may influence how admissible elements are combined, weighted, or interpreted in the construction of outcomes or inferences.

This suggests a conceptual distinction between:

(i) admissibility structure, which constrains the space of possible states, and
(ii) transformational structure, which governs how those states contribute to inference or meaning.

The present framework formally specifies the first of these roles through the operator βψ, while remaining agnostic about the specific form of the second. In different domains, transformational structure may be realised through probabilistic weighting, semantic interpretation, or other domain-specific mechanisms.

In related frameworks, such as Dot Theory, context-dependent transformations are proposed through explicit operators (e.g. ⊙) that combine or weight representations relative to an observer. These may be understood as concrete realisations of the transformational role distinguished here, while the present framework focuses on the admissibility structure captured by βψ.

10. Comparison with Existing Approaches

Existing approaches toward unification often proceed either by ontological reduction, as in programmes that seek a common underlying substrate, or by domain-specific structural abstraction, as in categorical reconstructions of particular formalisms. Other approaches, such as integrated information theory, pursue unification within a restricted explanatory target rather than across modelling paradigms as such.

The present framework differs in operating at a meta-structural level. It neither reduces domains to a shared ontology nor confines abstraction to a single formal family. Instead, it specifies conditions under which heterogeneous models may be compared through explicit alignment of state and contextual structure. Its central claim is therefore not that all domains instantiate the same ontology, but that they may be rendered structurally comparable when both ψ and μ are explicitly represented.

11. Synthesis

Ψ = (ψ, μ)

Principle 8 (Unification condition)

Unification requires explicit representation of both ψ and μ.

This formulation does not collapse paradigms into a single theory, but renders them structurally comparable through their extended state representations.
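The extended state Ψ = (ψ, μ) and the unification condition can be rendered as a minimal data structure. The field contents below are invented labels for illustration only; the structural point is that comparability requires μ to be explicitly represented rather than omitted.

```python
from typing import Any, NamedTuple

class ExtendedState(NamedTuple):
    """Extended state Psi = (psi, mu): system state plus explicit context."""
    psi: Any
    mu: Any

def comparable(a: ExtendedState, b: ExtendedState) -> bool:
    """Principle 8 as a structural check: two models are renderable as
    comparable only if both represent mu explicitly (mu not omitted)."""
    return all(s.mu is not None for s in (a, b))

# Hypothetical extended states from two different domains
m_econ = ExtendedState(psi={"price": 10}, mu={"regime": "stable"})
m_law = ExtendedState(psi={"facts": ["f1"]}, mu={"precedent": "p1"})
```

Two models from different domains pass the check, while a reduced model with `mu=None` fails it; nothing here collapses the domains into one theory.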

12. Testable Implications

Failures in modelling arise when μ is omitted or mis-specified.

Examples:

  • AI hallucinations

  • economic instability

  • scientific anomalies

Testable claim:

Formally, let H(M) denote hallucination rate for model M. 

Then the hypothesis is: H(M_with μ) < H(M_base) under matched model capacity and evaluation conditions.

This predicts that, in tasks where failure is driven by context omission or mis-specification, explicit modelling of μ should reduce hallucination or error rates under matched capacity.
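The hypothesis H(M_with μ) < H(M_base) can be sketched as a toy evaluation harness. Everything here is hypothetical: the task records, the two models, and the error criterion merely illustrate the comparison under matched conditions, not a concrete benchmark.

```python
def hallucination_rate(model, tasks):
    """H(M): fraction of tasks on which the model's answer fails to
    match the context-dependent ground truth. Illustrative only."""
    errors = sum(1 for t in tasks if model(t) != t["truth"])
    return errors / len(tasks)

# Toy tasks where the correct answer depends on the supplied context
tasks = [
    {"question": "q", "context": "A", "truth": "A"},
    {"question": "q", "context": "B", "truth": "B"},
]
m_base = lambda t: "A"               # mu omitted: fixed, context-blind answer
m_with_mu = lambda t: t["context"]   # mu explicit: answer tracks context

h_base = hallucination_rate(m_base, tasks)
h_mu = hallucination_rate(m_with_mu, tasks)
```

In this construction the base model errs exactly where the truth depends on context, so H(M_with μ) < H(M_base) holds by design; a real test would require matched capacity and a non-trivial evaluation set.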

The present framework operates at the level of representational structure rather than prescribing specific outcomes. Its role is to make explicit the contribution of contextual specification (μ) to inference, admissibility, and observation. To the extent that improved representational completeness supports more coherent or context-sensitive modelling, it may enable improved decision-making in applied domains such as policy, economics, or artificial intelligence. However, any downstream effects on system performance or welfare remain contingent on domain-specific constraints, institutional structures, and implementation conditions. In this sense, the framework defines a conditional pathway (from representation to inference to outcome) whose realisation is itself governed by contextual conditions.

If agents can modify contextual structure (μ), then adoption of the framework leads to iterative updates of μ based on outcomes. The rate of this process depends on the degree to which such updates are enacted, enabling increasingly systematic recording and measurement of reality under explicitly specified contextual conditions. In this sense, the framework supports a feedback structure in which representational completeness and empirical observation are progressively aligned through context-sensitive update.

13. Objective

The objective is a representational architecture enabling translation across paradigms, thereby realising unification at the representational level. In this sense, translation between paradigms is equivalent to aligning their admissible state spaces under context.

The present framework does not propose an alternative quantum formalism. Rather, it calls for a reconsideration of how formalism and interpretation are related. In standard interpretations such as Copenhagen, the mathematical structure of quantum mechanics is closely coupled to a particular interpretive language. The present approach instead treats that coupling as context-dependent, modelling it explicitly as part of the representational structure (μ) and not as arbitrarily independent. In this sense, the framework does not replace existing interpretations but reframes them as specific contextual renderings within a more general architecture of context-sensitive representation.

This is not intended as a replacement for domain-specific theories, nor as a universal formalism subsuming existing models. It does not prescribe a unique mathematical representation of context, nor does it impose a single ontology across domains. Rather, it defines a minimal structural requirement for representation and a criterion for comparing models that differ in formalism, ontology, or domain. Its scope is therefore meta-theoretical: to clarify, align, and evaluate representations to make them more computable, rather than to replace them.

Appendices

Appendix A: Economics

ψ: market variables
μ: expectations, information, behavioural context

A simple structural mapping can be made between economic modelling and constrained probabilistic inference. Market variables, such as prices, quantities, and signals, correspond to state variables ψ. Expectations, available information, institutional rules, and behavioural factors function as μ, shaping how those variables are interpreted and how future states are inferred.

Under this mapping, expectations act analogously to contextual prior weighting, influencing the probability assigned to future outcomes, while market structure and informational constraints determine the admissible transitions between states. Models that omit μ—such as those assuming fully rational expectations without behavioural or informational structure—risk producing systematically biased or unstable predictions.

The point is not that economic systems are reducible to probability theory, but that both economic modelling and probabilistic inference can be expressed as processes of inference under structured context, where outcomes depend on the interaction between system state and contextual constraints.
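The mapping in this appendix can be sketched as constrained probabilistic inference. The price band and the expectation function below are invented illustrative values: admissibility rules (institutional limits) restrict the state space, and expectations act as contextual prior weighting over what remains.

```python
def expected_price_distribution(prices, mu):
    """Appendix A mapping: candidate future prices are psi; expectations
    and institutional constraints are mu. All values illustrative."""
    # Admissibility: institutional limits (a price band) restrict transitions
    admissible = [p for p in prices if mu["low"] <= p <= mu["high"]]
    # Expectations act as contextual prior weighting over admissible states
    weights = {p: mu["expectation"](p) for p in admissible}
    total = sum(weights.values())
    return {p: w / total for p, w in weights.items()}

mu = {"low": 90, "high": 110,
      "expectation": lambda p: 1.0 / (1 + abs(p - 100))}  # anchored near 100
dist = expected_price_distribution([80, 95, 100, 105, 120], mu)
```

Omitting μ here would mean treating all five prices as admissible and equally weighted, which is the structural analogue of the biased, context-free prediction the text warns against.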

Appendix B: Legal Systems

ψ: case facts
μ: procedural rules, precedent

Due process ensures structured μ.

A simple structural mapping can be made between legal reasoning and constrained probabilistic inference. Case facts correspond to state variables ψ. Procedural rules, precedent, and admissibility standards function as μ, shaping which interpretations are legally admissible. Under this mapping, precedent acts analogously to contextual prior weighting, while due process constrains the admissible update path from facts to judgement. The point is not that law is reducible to probability theory, but that both can be expressed as inference under structured context.

Appendix C: Minimal Translation Procedure

Given two paradigms A and B:

Ψ_A = (ψ_A, μ_A)
Ψ_B = (ψ_B, μ_B)

Steps:

State identification
Identify ψ_A and ψ_B as the primary variables, entities, or state descriptions in each framework.

Context identification
Identify μ_A and μ_B as the contextual structures influencing admissibility, inference, or interpretation.

Structural alignment
Construct mappings:

  • ψ_A → ψ_B

  • μ_A → μ_B

Consistency condition

A translation is structurally valid if it preserves outcome structure:

P(O_A | ψ_A, μ_A) ≈ P(O_B | ψ_B, μ_B)

where “≈” denotes preservation up to domain-appropriate equivalence or approximation.

The alignment illustrated in Section 6.1 is a minimal instance of this condition, in which preservation of outcome structure is achieved by mapping measurement basis to likelihood structure and quantum projection to context-conditioned evaluation.

Admissibility alignment
In addition to state alignment, translation must preserve admissibility. If x ∈ A_{μ_A} (admissible in A), then its image under the mapping f should satisfy:

f(x) ∈ A_{μ_B}

where A_{μ} = β_{ψ}(X) defines the admissible set under context.

Interpretation
This condition ensures that translation is not merely symbolic, but respects the constraints imposed by context in each domain.

Scope
This procedure defines structural alignment rather than a fully specified computational translation algorithm. Its purpose is to establish when two paradigms can be meaningfully compared or mapped within a shared representational architecture.
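The steps above can be sketched as a single validity check. The mapping, outcome distributions, admissible sets, and the tolerance `tol` (standing in for the domain-appropriate "≈") are all hypothetical; the function only illustrates the structure of the consistency and admissibility-alignment conditions.

```python
def valid_translation(psi_map, outcomes_a, outcomes_b, adm_a, adm_b, tol=0.05):
    """Minimal check of the Appendix C procedure.
    psi_map: the mapping f from states of A to states of B.
    outcomes_*: P(O | psi, mu) in each paradigm, as dicts state -> prob.
    adm_*: admissible sets A_mu in each paradigm.
    tol: stand-in for domain-appropriate equivalence '~='."""
    # Admissibility alignment: f(x) admissible in B whenever x is admissible in A
    if not all(psi_map[x] in adm_b for x in adm_a):
        return False
    # Consistency condition: outcome structure preserved up to tolerance
    return all(abs(outcomes_a[x] - outcomes_b[psi_map[x]]) <= tol
               for x in adm_a)

f = {"up": "+1", "down": "-1"}  # hypothetical cross-paradigm mapping
ok = valid_translation(f,
                       outcomes_a={"up": 0.7, "down": 0.3},
                       outcomes_b={"+1": 0.68, "-1": 0.32},
                       adm_a={"up", "down"}, adm_b={"+1", "-1"})
```

Tightening `tol` or breaking admissibility alignment makes the same mapping fail, which is the intended behaviour: translation validity is relative to the equivalence standard of the domain.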

Appendix D: Outline of an Applied Study Design

This appendix outlines a minimal study design illustrating how the representational framework developed in the main text may be operationalised in an applied setting. The purpose of this outline is not to provide a complete experimental protocol, but to indicate how the distinction between reduced and context-sensitive representations may be evaluated empirically.

D.1 Representational Structure

Individuals are represented by extended states of the form:

Ψ = (ψ, μ)

where:

  • ψ denotes system state, including clinical and physiological variables

  • μ denotes contextual structure, including behavioural, environmental, and self-reported information

Observable outcomes O are modelled conditionally as:

P(O | ψ, μ)

The study evaluates whether inclusion of μ improves modelling performance relative to reduced representations based on ψ alone.

D.2 Study Design

The study is conceived as a prospective observational cohort within a living laboratory environment in which participants contribute data on a voluntary basis. Participants provide informed consent and retain control over the use of their data.

Data are collected longitudinally from multiple sources, including:

  • clinical and physiological measurements corresponding to ψ

  • behavioural and contextual information corresponding to μ

D.3 Modelling Comparison

Two classes of models are constructed and evaluated under matched conditions:

  1. Reduced models based on system state only: P(O | ψ)

  2. Context-sensitive models based on extended states: P(O | ψ, μ)

The comparison is performed on the same population and over the same time intervals.

D.4 Evaluation Criterion

The principal evaluation criterion is whether:

P(O | ψ, μ) outperforms P(O | ψ)

Performance may be assessed using domain-appropriate measures such as predictive accuracy, calibration, or error rates.

Particular analytical interest attaches to cases in which:

π(Ψ₁) = π(Ψ₂) = ψ, but P(O | Ψ₁) ≠ P(O | Ψ₂)

Such cases indicate that contextual structure contributes to outcome differentiation beyond what is captured by ψ alone.
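The case of analytical interest can be illustrated with a toy comparison. The records, variable names, and both models are invented for illustration: two individuals share the same ψ but differ in μ and in outcome, so the reduced model cannot separate them while the context-sensitive model can.

```python
def accuracy(predict, records):
    """Fraction of records on which the model's prediction matches O."""
    return sum(predict(r) == r["O"] for r in records) / len(records)

# Two individuals with identical psi but different mu and different outcomes:
# pi(Psi_1) = pi(Psi_2) = psi, yet P(O | Psi_1) != P(O | Psi_2).
records = [
    {"psi": "high_bp", "mu": "sedentary", "O": "event"},
    {"psi": "high_bp", "mu": "active",    "O": "no_event"},
]
reduced = lambda r: "event"  # P(O | psi) only: same prediction for both
contextual = lambda r: "event" if r["mu"] == "sedentary" else "no_event"

acc_reduced = accuracy(reduced, records)
acc_context = accuracy(contextual, records)
```

The reduced model is at chance on this pair by construction, which is the signature of contextual structure contributing to outcome differentiation beyond ψ alone.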

D.5 Interpretation

The study does not aim to establish a comprehensive theory of human behaviour or health. Rather, it evaluates whether explicit representation of contextual structure improves modelling performance in practice.

A positive result would support the claim that omission of μ leads to systematically incomplete representations when outcomes depend on both system state and context. A null result would indicate that, in the present setting, contextual information as represented does not materially improve modelling performance.

D.6 Scope

This outline is intended as an illustration of how the framework may be operationalised. It does not specify a particular clinical domain, data modality, or modelling technique, all of which may vary according to application.

References

Barwise, J., & Perry, J. (1983). Situations and Attitudes. MIT Press.

Bunge, M. (1977). Treatise on Basic Philosophy, Volume 3: Ontology I: The Furniture of the World. Reidel.

Cartwright, N. (1999). The Dappled World: A Study of the Boundaries of Science. Cambridge University Press.

Friston, K., Parr, T., & de Vries, B. (2017). Active Inference: A Process Theory. Neural Networks, 96, 1–19. https://doi.org/10.1016/j.neunet.2017.09.015

Friston, K. J., Da Costa, L., Sajid, N., Heins, C., Ueltzhöffer, K., Pavliotis, G. A., & Parr, T. (2023). The Free Energy Principle Made Simpler but Not Too Simple. Physics Reports, 1024, 1–29. https://doi.org/10.1016/j.physrep.2023.07.001

Fuchs, C. A., Mermin, N. D., & Schack, R. (2014). An Introduction to QBism with an Application to the Locality of Quantum Mechanics. American Journal of Physics, 82(8), 749–754. https://doi.org/10.1119/1.4874855

Guevara Calderón, J. T. (n.d.). General Theory of Consciousness (GTC) and Perception Mechanism Theory (PMT) [Preprint]. Academia.edu. https://www.academia.edu/143826617/General_Theory_of_Consciousness

Hacking, I. (1982). Language, Truth, and Reason. In M. Hollis & S. Lukes (Eds.), Rationality and Relativism (pp. 48–66). MIT Press.

Kochen, S., & Specker, E. P. (1967). The Problem of Hidden Variables in Quantum Mechanics. Journal of Mathematics and Mechanics, 17(1), 59–87.

Kuhn, T. S. (1962). The Structure of Scientific Revolutions. University of Chicago Press.

Marr, D. (1982). Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. W. H. Freeman.

Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann.

Pearl, J. (2009). Causality: Models, Reasoning, and Inference (2nd ed.). Cambridge University Press.

Rovelli, C. (1996). Relational Quantum Mechanics. International Journal of Theoretical Physics, 35, 1637–1678. https://doi.org/10.1007/BF02302261

Vossen, S. (2024–2026). Dot Theory Logic Framework. https://www.dottheory.co.uk/logic

