Context as Structure

A Representational Architecture for Context-Sensitive Modelling and Its Demonstration in Artificial Intelligence

Abstract

Scientific and computational models are typically defined over state descriptions ψ, yet in many domains observable outcomes depend not only on ψ but on contextual structure μ. This paper proposes a general representational architecture in which the minimal unit of modelling is the extended state Ψ = (ψ, μ), where μ specifies the conditions under which ψ is admissible, interpretable, and operational.

An axiom of representational completeness is introduced, together with a minimal construction demonstrating that, in certain systems, no function of ψ alone can preserve outcome behaviour across admissible contexts, whereas a function of (ψ, μ) can. The notion of necessity employed is conditional rather than absolute: contextual structure is required only insofar as representations are intended to support reliable inference, prediction, or decision-making about situational reality and does not comment on the fundamental nature of reality itself.

The framework is examined across domains and demonstrated concretely in artificial intelligence systems, where identical inputs produce systematically different outputs under different contextual specifications. The central claim is that contextual structure is not auxiliary but functionally constitutive of representation wherever outcomes depend on context. This provides a basis for cross-domain comparison and for the design of context-sensitive modelling systems.

1. Introduction

Scientific knowledge is distributed across domains including physics, computation, economics, and law. Each domain develops internally coherent models, yet there is currently no shared representational structure across them. One source of this fragmentation is the implicit treatment of context within each model. Models are typically defined over state descriptions ψ, while contextual assumptions are incorporated informally, embedded in interpretation, or externalised to experimental conditions.

Empirically, however, outcomes often depend on contextual structure. Identical state descriptions may yield different results under different conditions of observation, interpretation, or intervention. This suggests that representations defined solely over ψ are, in general, incomplete.

This paper proposes that the minimal unit of modelling in context-sensitive settings is not ψ alone, but the extended state:

Ψ = (ψ, μ)

where μ denotes contextual structure. The aim of this extension is not to replace existing theories, but to provide a representational architecture within which models across domains may be translated, compared, aligned, and evaluated for formal integration.

2. Representational Architecture

Let:

ψ denote a system state or description
μ denote contextual structure
Ψ = (ψ, μ) denote the extended state

Contextual structure μ may include:

  • admissibility constraints

  • observational conditions

  • informational availability

  • semantic or interpretive structure

  • intervention or control regimes

In this formulation, μ is not auxiliary. It determines:

  • which states are meaningful

  • which inferences are valid

  • which outcomes are observable
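The extended state can be sketched as a simple data structure. This is a minimal illustration of the architecture, not an implementation prescribed by the paper; the class and field names are assumptions for demonstration only.

```python
from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True)
class ExtendedState:
    """Minimal extended state Psi = (psi, mu): a state description
    paired with the contextual structure that makes it operational."""
    psi: Any  # system state or description
    mu: Any   # contextual structure (admissibility, observation, interpretation, ...)

# Example: the ambiguous input from Section 6 under a financial context.
state = ExtendedState(psi="The bank is closed", mu="financial")
```

Making Ψ the unit of modelling means that downstream functions receive (ψ, μ) together, so no inference step can silently drop μ.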

3. Representational Completeness

3.1 Axiom of Representational Completeness

A representation based on ψ alone is incomplete if there exist μ₁ and μ₂ such that:

P(O ∣ ψ, μ₁) ≠ P(O ∣ ψ, μ₂)

In such cases, outcome distributions depend irreducibly on context.

3.2 Minimal Construction: Conditional Representational Incompleteness

The structural claim of representational incompleteness can be illustrated by a minimal finite construction.

Let:

ψ ∈ {a}
μ ∈ {μ₁, μ₂}
O ∈ {0, 1}

Define:

P(O = 1 ∣ ψ = a, μ₁) = 1
P(O = 1 ∣ ψ = a, μ₂) = 0

Equivalently:

P(O ∣ ψ = a, μ₁) ≠ P(O ∣ ψ = a, μ₂)

Suppose there exists a reduced representation:

f(ψ) = P(O = 1 ∣ ψ)

Since ψ = a is fixed, we must have:

f(a) = c, c ∈ [0, 1]

But no c satisfies:

c = 1 and c = 0

Therefore, no function of ψ alone can reproduce the outcome structure across both contexts.

Proposition 1 (Minimal incompleteness result)

There exist systems for which no reduced representation f(ψ) can preserve outcome behaviour across admissible contexts, whereas an extended representation g(ψ, μ) can do so exactly.

Proof

Define:

g(ψ, μ₁) = 1
g(ψ, μ₂) = 0

Then g reproduces the outcome behaviour exactly, while no f(ψ) can do so.
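The construction above is small enough to verify mechanically. The following sketch encodes the outcome table and checks both halves of Proposition 1: the extended representation g(ψ, μ) reproduces the outcomes exactly, while the two contexts demand incompatible values from any single f(a) = c.

```python
# Outcome table for the minimal construction: one state, two contexts.
P = {
    ("a", "mu1"): 1.0,  # P(O = 1 | psi = a, mu1)
    ("a", "mu2"): 0.0,  # P(O = 1 | psi = a, mu2)
}

def g(psi, mu):
    """Extended representation: reads the outcome table directly."""
    return P[(psi, mu)]

# Any reduced representation f(psi) must assign one constant f("a") = c.
# Collect the values the contexts require; if they differ, no c exists.
values_required = {P[("a", mu)] for mu in ("mu1", "mu2")}
no_reduced_f_exists = len(values_required) > 1
```

Here `no_reduced_f_exists` evaluates to True, confirming that the outcome structure cannot be preserved by any function of ψ alone.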

Interpretation

This construction shows that whenever distinct admissible contexts produce distinct outcomes for the same ψ, omission of μ produces irreducible representational loss.

The notion of necessity employed here is conditional rather than absolute. Contextual structure is not asserted as a metaphysical requirement for all representations, but as a functional requirement for representations intended to support reliable inference, prediction, or decision-making in environments where outcomes depend on context. Representational completeness is therefore defined relative to modelling purpose.

4. Contextual Action Rule

Context must be included whenever its omission changes outcome behaviour or admissibility:

If:

P(O ∣ ψ, μ₁) ≠ P(O ∣ ψ, μ₂)

then modelling solely over ψ is insufficient for the intended purpose.
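The action rule can be stated as a decision procedure: given the outcome distributions for a fixed ψ under each admissible context, include μ whenever any two distributions differ. This is a sketch of the rule as stated above; the dictionary encoding is an assumption of the example.

```python
def requires_context(outcomes_by_mu):
    """Section 4 rule: return True if P(O | psi, mu) varies across
    admissible contexts, i.e. modelling over psi alone is insufficient."""
    distributions = list(outcomes_by_mu.values())
    return any(d != distributions[0] for d in distributions[1:])

# Context-dependent case (the minimal construction): mu must be included.
dependent = requires_context({
    "mu1": {1: 1.0, 0: 0.0},
    "mu2": {1: 0.0, 0: 1.0},
})

# Context-independent case: psi alone suffices, mu may be fixed or dropped.
independent = requires_context({
    "mu1": {1: 0.5, 0: 0.5},
    "mu2": {1: 0.5, 0: 0.5},
})
```

This also captures the conditional character of the claim: when the distributions coincide, the rule licenses the reduced model rather than forbidding it.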

5. Cross-Domain Structure

The extended state formulation appears across domains:

  • Physics: ψ as system state, μ as measurement context

  • Bayesian inference: ψ as variables, μ as prior and likelihood structure

  • Law: ψ as case facts, μ as procedural rules and precedent

  • Artificial intelligence: ψ as input, μ as prompt or context

In each case:

P(O ∣ ψ, μ)

defines observable outcomes.

6. Demonstration in Artificial Intelligence

Artificial intelligence systems provide a concrete realisation of context-dependent representation.

6.1 Setup

Let:

ψ denote an input string
μ denote contextual specification
O denote model output

6.2 Ambiguity under reduced representation

Consider:

ψ = “The bank is closed”

Define:

μ₁: financial context
μ₂: river context

Empirically:

P(O ∣ ψ, μ₁) → financial interpretation
P(O ∣ ψ, μ₂) → geographical interpretation

Thus:

P(O ∣ ψ, μ₁) ≠ P(O ∣ ψ, μ₂)

6.3 Reduced model limitation

A model defined only over ψ can produce at most:

P(O ∣ ψ)

which averages over contexts, conflating the context-specific interpretations and leading to ambiguity or error.

There exists no function f(ψ) such that:

f(ψ) = P(O ∣ ψ, μ)

for all μ.

6.4 Context-sensitive representation

Including μ yields:

P(O ∣ ψ, μ)

which produces stable and consistent outputs aligned with context.

The expected pattern, consistent with observed behaviour of such systems, is:

H(M_with μ) < H(M_without μ)

where H denotes the error or hallucination rate.
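The contrast between the reduced and context-sensitive models can be sketched with a toy disambiguator. The sense lexicon and the fixed fallback choice are illustrative assumptions, not properties of any real language model; the point is only that a model over ψ alone must commit to one interpretation across all contexts, while a model over (ψ, μ) need not.

```python
# Illustrative senses of the ambiguous input under each context (assumed).
SENSES = {
    "financial": "institution",
    "river": "shore",
}

def interpret_with_context(psi, mu):
    """Context-sensitive model: (psi, mu) selects a single admissible sense."""
    return SENSES[mu]

def interpret_without_context(psi):
    """Reduced model: must fix one sense for all contexts."""
    return "institution"  # arbitrary committed choice

# Error counts across the admissible contexts (H in the text).
contexts = ["financial", "river"]
errors_with = sum(
    interpret_with_context("The bank is closed", mu) != SENSES[mu]
    for mu in contexts
)
errors_without = sum(
    interpret_without_context("The bank is closed") != SENSES[mu]
    for mu in contexts
)
```

Whatever fixed sense the reduced model commits to, it errs in at least one context, so `errors_with < errors_without` holds by construction.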

6.5 Interpretation

This realises the minimal construction in a practical observable system. Contextual structure determines admissible interpretation, and omission of μ produces systematic failure. Context is therefore functionally constitutive of representation in such systems.

7. Implications

The framework implies:

  • Models should be evaluated for representational completeness, not only internal consistency

  • Failures often arise from omitted or mis-specified context

  • Improved systems require explicit modelling of μ

In artificial intelligence, this explains:

  • prompt sensitivity

  • hallucinations

  • context window dependence

8. Limits

This framework:

  • does not specify ontology

  • does not claim context is universally necessary

  • does not fully formalise all contextual transformations

It addresses public representational structure and does not attempt to formalise privately inhabited meaning, except insofar as such meaning enters observable modelling.

9. Conclusion

In many domains, representation is not fully specified by state descriptions alone. When outcomes depend on context, the minimal unit of modelling is the extended state Ψ = (ψ, μ). This is not an ontological claim, but a structural one: context must be included wherever it is required for useful inference, prediction, or decision-making.

The framework presented here formalises this condition and demonstrates it through both minimal construction and practical instantiation in artificial intelligence systems. In such systems, omission of contextual structure produces systematic ambiguity that cannot be resolved through ψ alone.

The implication is not that existing models are invalid, but that their scope is conditional. Models defined solely over ψ remain valid within contexts where μ is fixed or irrelevant, but become incomplete where outcomes depend on contextual variation.

By making contextual structure explicit, this framework provides a basis for comparing models across domains and for designing systems that are robust to contextual dependence. Where context is omitted, representations may remain formally defined, but their usefulness is correspondingly limited.

This work is based on further work shared across this website.

Thank you for your time,

S.
