The Dot Theory: The Name-and-Claim game,

the FUNDAMENTAL Game-theory strategy of conscious EMERGENT reality. 

A pure-logic Natural Philosophy proof of a pragmatic computational perspective on reality, aiming to satisfy the requirements for a Grand Unified Theory proper, written as a 4000-word essay on the mathematical (computational) perspective, Game Theory, physics, and the nature of reality and consciousness, by Stefaan Vossen.

Abstract

Logics are tools we use to calculate things within what we humans consider to be the components of reality. Dot Theory posits that the logics driving Quantum Mechanics (QM) and General Relativity (GR) are both correct across scales, but that the mathematical formulations they are currently composed of (particularly QM’s Spinor structure and GR’s energy function, and their derivatives) benefit from a formal correction for observer-induced lensing bias in order to fully describe Quantum reality. This correction involves shifting the definition of reality from one delineated by an exclusion principle (e.g., QM’s Spinors treating observational data as nonlocal and excluding observer-specific context) to one refined by an inclusion principle (incorporating probabilistically associated metadata, such as biometric or historical data, as local to the observer) as the signature components of reality. This structural mathematical adjustment, formalised through the notional introduction of a) a Mother Matrix M_{\mu\nu}(\psi), b) a calculation-by-calculation-determined cumulative lensing effect (⊙), and c) a symbolic reformulation to the meta-equation E = m⊙c³, unifies QM and GR by accounting for the observer’s subjective context as part of its realism, logically enabling more accurate predictions across scales within said reality.

Key Points:

  • Lensing Bias Correction: Dot Theory argues that QM’s formalised assumption of non-locality (via Spinors) and GR’s objective spacetime framework overlook local observer biases (e.g., emotional metadata, measurement-tool context). Correcting this bias, via the dot operator and ψ, aligns both theories with Quantum reality’s fractal, observer-driven nature.

  • Exclusion to Inclusion: QM’s Spinors exclude local observer data by assuming nonlocality. Dot Theory proposes including probabilistically relevant metadata (e.g., “progressive-looking data”: data that displays progress as considered in subjects, drawn from similar past events) to treat observational data as local, making QM continuous with the Standard Model. GR is then adjusted similarly and reciprocally with a bias-inclusive energy function, E = m⊙c³.

  • Empirical Support: The theory cites EHT lensing residuals (0.318 μas) and biometric fractals as evidence, with testable applications in physics (e.g., trajectory forecasting) and healthcare (e.g., pain studies), but, for the avoidance of doubt, it is presented here as a speculative proposal supported by a logic valid enough for evaluation and potential implementation.

This piece of logic imposes itself on our individual and collective senses of realism. It is not a physics paper, but one that argues, within the field of Natural Philosophy that supports physics, for a logical corrective path. It informs us of the way our sense of realism defines our reality-defining concepts (experiences and ideas of who and what we are), and then, logically and inescapably, leads to the observation that we humans, our mathematics, and our physics currently:

  • a) describe our relationship to reality (as described by and within our current-day terms used in Physics), and

  • b) do the mathematics in physics (and use it to calculate the meaning of the data in relationship to the reality we experience) incompletely.

At its most fundamental, the Dot Theory’s question of Natural Philosophy logic is:

What if reality is a fractal, recursive computation which is co-created by the observer, where QM and GR equations are contextual tools unified by a generative matrix quantifying subjective bias?

This may be seen as a long-winded question to ask, but if the answer is “yes”, then the implication is that we can use existing formal mathematical and computational networks to create real-world, individualised predictive healthcare. This can be done by using:

  • Observer State (( \psi )): Hilbert space vector encoding biometric metadata (e.g., pain signals), driving measurement.

  • Fractal Seed (( k = \frac{1}{4\pi} )): Geometric constant, aligning with lensing residuals (0.318 μas, EHT 2022).

  • Lensing Effect (( \odot )): ( R_{n+1} = R_n \cdot (1 + k \cdot \log(s/s_0) \cdot F_{\mu\nu}(\psi)) ), unifying QM’s diffraction and GR’s lensing (see the numeric sketch after this list).

  • Meta-Equation: ( E = m \odot c^3 / (k T) ), where ( \odot = 1 + k \cdot \log(s/s_0) \cdot F_{\mu\nu}(\psi) ); absorbs QM, GR, String Theory, LQG.

  • Mother Matrix (( M_{\mu\nu}(\psi) )): Generative tensor, producing fractal topology via ( \nabla_\mu F^{\mu\nu} = k \cdot J_\nu ).
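To make the mechanism above concrete, here is a minimal numeric sketch in Python, assuming ( F_{\mu\nu}(\psi) ) is collapsed to a single scalar bias score (an illustrative simplification; the theory presents it as a tensor built from observer metadata):

```python
import math

K = 1 / (4 * math.pi)  # fractal seed k, as listed above

def lensing_factor(s: float, s0: float, f_psi: float, k: float = K) -> float:
    """Per-calculation lensing factor: 1 + k * log(s/s0) * F(psi).

    f_psi stands in for F_{mu nu}(psi), collapsed to one scalar here."""
    return 1 + k * math.log(s / s0) * f_psi

def recurse(r0: float, scales: list[float], s0: float, f_psi: float) -> float:
    """Iterate R_{n+1} = R_n * (1 + k * log(s/s0) * F(psi)) across scales."""
    r = r0
    for s in scales:
        r *= lensing_factor(s, s0, f_psi)
    return r

# Toy run: a baseline prediction of 1.0, nudged across three observation scales.
print(recurse(1.0, scales=[2.0, 4.0, 8.0], s0=1.0, f_psi=0.05))
```

The only moving parts are the ones the list names: the fractal seed k, a scale ratio s/s₀, and an observer term; all numeric values are toys.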

Demonstrative Logic:

  • Consistency: Recovers QM (( \Delta\theta_i \approx \lambda / d_i )), GR (( \Delta\theta_i = 4 G M_i / (r_i c^2) )), with fractal corrections.

  • Empirical: EHT lensing (0.318 μas), biometric fractals (pain studies), molecular dynamics.

  • Testable: Refine lensing residuals (2026 EHT), map ( F_{\mu\nu}(\psi) ) in healthcare, build ultranet for AI.

Why consider reality to be fractal?

  • Unifies physics and consciousness, unlike String Theory or LQG.

  • Aligns with Gödelian limits and empirical realities (measurement dependence).

  • Positive practical applications: healthcare, cosmology, AI, experimental physics and resource management.

Challenge:

Test it. Debate it. Reframe reality as participatory.

This premise, as a formal representation of reality, may to some not feel “complete” or final in the classic Cartesian sense, because it displaces the concept of measurement to that of a memory of meaning. This, again to some, leaves the theory “open” to debate. The author’s position is, firstly, that this aligns with a sense of realism that lives in the present; the broader website details a paper through which the logic described here can be better understood, and was written to provide a surface for debate, with interactions welcome here or on social media. Secondly, as a meta-theory, it invites us to make applications of existing tools better. It would be difficult, scientifically speaking, to argue against the inclusion of improvement.

This theory, then, together with the novel method it offers for demonstrating and observing this incompleteness as a mathematical object, gives us a way to calculate its topology relative to other, most-like interpretations as a lensing factor. When this is followed by applying the factor correctively (as a now more informed and adjusted computational perspective) to the data used for any standard calculation, it will, so the theory argues, inevitably provide access to a variable-constant-based computational methodology that calculates experienced reality more accurately than any prediction made using classic linear logic and algorithms alone.

As a piece of logic in algorithmic science, it is the demonstration of a codable and salient heuristic nuance on the ultimate binary blackboard of Rational and Irrational strategies. To be more specific: two rational Game-strategic motives (in von Neumann’s sense) and one irrational motive, which can be classified as:

  • the motive to maximise a strategy of (relative) self-improvement or betterment;

  • the motive to gain victory;

  • the third, irrational motive: wanting to self-reduce, or lose.

Rational, by this definition, then comes to mean “arrived at with information and ratio, containing data”.

In other words: where previously logics expressed the motive as to win or to lose, this now nuances the picture and puts the available data into algorithmic terms that spell out the motivational difference between doing something to get better and doing something to win. This nuance makes the computation not only generally relative but, with enough available data, general and recursively relative; i.e., we come to know how something is probably going to go by asking which trajectory achieved the most anticipated/desired target for the known object most like itself. It is, in essence, a different way of asking questions about the data available about reality, in a scientific world that accepts that we are real and self-iterative.

The paper argues that translating this expectant but informed computational perspective into code, and expanding the mathematics of Quantum Mechanics to match our current understanding and translation of it, makes the existing approach able to give better answers, by formulating better questions from systems that can evaluate the already available databases as correlative metadata for emerging patterns.

Rather than only asking whether the null hypothesis is correct, it also invites us to observe what can be learned from differently similar prior analogs. In other words, Dot Theory is a logical proposal for a more precise algorithmic definition of reality. Whilst not a “complete” definition in the Cartesian dualistic sense, it is teleologically complete and defined by the actions it undertakes. This, after all, is all a theory ever needs to do to be valid.

Introduction 

When naming something as any one thing, you inevitably attach context to a word. You name, or bind, a set of relations, together with their metadata, to a word or symbol that, to you, comes to mean that very something you meant. By its utterance (internal or external) alone, you bring it into existence, even if briefly, inappropriately, or incorrectly, in (un)spoken or written language for another to imagine or notice. You turn a vague “some” into a “thing” and make it visible to others, making it data. Quite magical when you think about it. In that sense, reality is data; you just need to know how to look at it.

Whether it is noticed or understood well enough by others for the information to be shared and given the agreed meaning that this data is indeed “that thing” remains up for debate: whether it is noticed to a sufficient extent to be recognised as “it”. This is where notions of hierarchy, value, change, evolution, and culture emerge. These are subjective contextual data sets and feelings.

Whether as infant or ancestral hominid, when we first uttered sounds, they came to represent and mean pain, excitement, fear, or pleasure. Later, as either evolved, sounds became words, and languages were created: memorable cultural languages and professional languages, with big, noticeable records in books and institutions, lasting longer than others; but also personal languages, woven from the languages of others near and far, and therefore never truly personal, yet simultaneously impossible for another to fully experience, and incomprehensibly, uniquely individual.

Limiting as that may occasionally seem, language, communication, and the act of creating languages have brought us increasing levels of self-awareness. This remarkable ability that we now have access to, and are able to reflect on (at a never-before-experienced scale, hard to imagine even one century ago), is courtesy of the explosion in technology and data-calculation tools that followed the development of Quantum Field Theory.

With the past 20 years of data collection, we are now demonstrably at a stage of being able to recognise, from afar, deeply personal levels of inter-personal psychodynamic communication trends and habits, through the tracking of fashions, trends, memes, and schemes that are thematic of human expression. We have moved from a grunt faintly expressing pain to understanding the sequence of individualised psychodynamic stimulus-sequences that would promote or hinder healing. Our understanding of things has reached well beyond the mere calculation of the trajectories of planets, and into that of individual experiences.

Demonstration 

This remarkable ability to predict individual real-world lived experiences is ultimately only one end-product of the act of naming things. Without naming things we would not, as we are now, be able to recognise that our fellow human beings are in every way identical to us, had they been born in the time, place, and body we were; that “consciousness” is an emergent property defined by the observable traits of its container.

Our individual commercial and political predictability, observed in data management today, makes that obvious and inescapable. But we resist: we are, and can be, so much more than that. And of course we are; but naming things, the language of creating them into being, is not the only purpose of language.

Language can also be used to compute, as well as express, the language of the subjective world of interpretation and feelings. This language confusingly uses many of the same words and symbols as the “language of things” world, although the words used to express feelings are more often used interchangeably and acquire far more meaning from their context than the words describing our more predictable behaviours.

It too creates communicable reality, albeit within a different mental and personal experiential framework. A fine example is the use of the words “Fear” and “Love”, or the idea of having both words in the same sentence and in literary proximity, or choosing to point that out, etc. There, the choice of words, their use, and their logical deployment are each, individually, emotively responsive, and appear to colour the individual’s landscape of the known literal meaning of the words.

That secondary layer of data is a more subtle and interwoven meta-layer, as it is the product of the Personal language that references the objective, literal layer: the meaning of what we mean, so to speak. This Personal language, as stated before, cannot possibly be fully experienced or fully understood by another, and can therefore be considered a complete information “black box”, a void and unknowable entity.

Proof 

Yet fortunately, and as demonstrated by Quantum Mechanics in the world of objective observations and personal data monitoring, it is clearly not critical for the world of subjective observations (the personal experiential world) to be fully known in order for it to be known well enough for significant aspects of it to be successfully predicted. This second layer of enmeshed information, too, is probabilistically accessible through contextually shared patterns of known (observed) and interesting (teleologically relevant) causes and outcomes. An inherent co-dependency emerges: if I fall over, I sometimes get hurt. If someone else falls, they sometimes get hurt. If someone else pushes me over, it sometimes hurts me. If I push someone else over, it may hurt them. If they have a diagnosed motion disorder, they will be even more likely to get hurt if someone pushes them, etc. What is more true for others that are most like me is probably more true for me than what is, on average, true for everyone else.
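Computationally, the “most-like” claim in the last sentence is just the difference between a marginal and a conditional estimate. A minimal sketch, with invented toy records:

```python
import statistics

# Invented toy records: (is_most_like_me, got_hurt_when_pushed).
records = [
    (True, 1), (True, 1), (True, 0), (True, 1),
    (False, 0), (False, 0), (False, 1), (False, 0),
]

population_rate = statistics.mean(hurt for _, hurt in records)              # 0.375
most_like_rate = statistics.mean(hurt for alike, hurt in records if alike)  # 0.75

print(f"everyone: {population_rate:.3f}, most-like me: {most_like_rate:.2f}")
```

The subgroup rate differs sharply from the population rate; the proposal amounts to preferring the former whenever a statistically credible most-like group exists.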

Through these shared recorded, or remembered, experiences, we develop notions of shared languages in which ideas or concepts are quite precisely and deeply understood in shared, yet diversely expressed, ways. These widely diverse shared languages, with widely varying potential user-group sizes (from 2 to the total/infinite user-group size), are each rooted in semi-homogenous shared patterns of experiences, traumas, and contexts.

In their pattern analysis, we realise and observe recurring patterns with varying cluster sizes yet reliable repeatability and association (emergence) with wider trends (age groups, locations, education, socioeconomic changes, migrations, cataclysms in the environment, etc.).

This process informs and widens the calculational perspective or, otherwise put, changes the way and context in which the data is evaluated: against what, by what, and how. Remarkably, when the data defining the observation is processed as such prior to any standard computation, this process will always and inevitably offer the potential to improve the outcome of any prediction or calculation when compared to the standard prediction or calculation.

Description 

This, now altered, functional relationship to data has an odd Easter egg-laying, quasi-solipsistic, involuntarily evolving, inherently progressive, and inevitably benevolent-appearing mathematical encodement as its fabric.

It is the fabric, the fundamental necessity for it to be real and observable to us, and it exists because it is in everything that “is”, in everything that is perceivable to us. As such, it evolved with us as we evolved with it, to use it to our benefit by perceiving it, understanding its potential, and applying it creatively, with the successful versions living to tell the tale of its survival in our surviving languages.

This, of course, holds only when one is willing to accept logic and/or mathematics as the fundamental expressions of reality in language, mathematics, and physics (and that, therefore, the personal, individual experience of reality can, so to speak, be known at birth, but is not predestined; instead it involves a series of binary choices that create options for more refined personalisation or predictability and the formation of unique individuality).

When considered as a game strategy, this could be rated as the utmost successful game strategy in the universe. But it has no equal; i.e., there is no alternative strategy by which to operate, leaving the use of the term “success”, strictly speaking, underdefined, but needed as relative to the observer’s opinion or perspective. This is because it is, from another perspective, not just a “strategy” but a “lens” through which we perceive and use the constructs and tools from which we create our understanding, meaning, and value of the world. A lens that would not exist if the strategy wasn’t, or hadn’t been, successful. Their temporal relation means that there is no existing knowledge of prior, unsuccessful lenses, demonstrating a form of evolution that functions on a one-on-one chain.

The Dot Theory’s application is to crawl up that unique, unbroken chain to the data (the “game” that can be played with the data), resulting in an ever-refining reduction of the observer-lensing present in all (recorded) data used for any calculation, by upcycling the data through correction for the inevitable lensing. This is based on known pattern-formation in the associated information known about most-like observations. When done prior to feeding the data into a predictive calculation (answering a question), the now improved, de-lensed data used in the (standard) calculation makes for a comparatively improved prediction.
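As a sketch of the ordering this paragraph describes (correct the data first, then run the untouched standard calculation), where de_lens, standard_predict, and the numeric values are hypothetical stand-ins rather than the theory’s formal machinery:

```python
def de_lens(datapoint: float, odot: float) -> float:
    """Upcycle a recorded datapoint by dividing out the cumulative
    observer-lensing factor (odot) estimated from most-like prior data."""
    return datapoint / odot

def standard_predict(data: list[float]) -> float:
    """Stand-in for any existing, unchanged predictive calculation."""
    return sum(data) / len(data)

raw = [1.02, 0.98, 1.05]      # recorded (lensed) observations -- toy values
odot = 1.003                  # hypothetical cumulative lensing factor

# The claimed gain comes purely from the ordering:
# de-lens first, then run the untouched standard calculation.
prediction = standard_predict([de_lens(x, odot) for x in raw])
print(prediction)
```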

It seeks the more correct answer using the same tools in that process, whilst admittedly using probabilistic calculation for its a priori computation.

Its theoretical benefit lies in the fact that its corrections to the relationship between the observer and the data result in improved predictions (using standard calculations) wherever experiments or target predictions are undertaken. Healthcare and experimental physics are obvious spaces requiring tools for the predictive analysis of multidimensional (subjective/objective) data layers. This in turn has simultaneous and significant direct applications in human health and welfare, and in energy-resource management and creation.

As such, this paper is a reasoning for the acceptance of a proposal that adjusts the terms expressing and defining the conditions of our current understanding of reality, to one that considers causality a co-creative act and describes its smallest particle as massless, fractal in structure, and Planck-scale in proportion.

Evolution 

This oddly offset (partial, yet complete) but usefully pertinent change in meaning (created in the space/event between strategy and lens), in how we can compute our perspectives on reality, is the function that underpins dualism. It is this emergent “real-making” function of defining mathematical and conceptual nodes and edges that defines the traits of what will retrospectively, in their analysis, be seen as useful or useless. This ability to define progress as “beneficial relative to the observer” can be seen as underpinning the fullest meaning of the term “evolution”, in both its genetic and progressive senses as well as its conscious and self-aware senses.

As a data-analytical strategy, it becomes remarkably interesting for predictive algorithms in subjects like health and wellbeing. As a more general concept in physics and philosophy, it formulates (when applied in each field), as best as is possible within the practical limitations of mathematical reality, what can be presented conceptually as a Grand Unified Theory: this both in its presentation and in its logically demonstrable effect, if perhaps not in its appearance.

Function 

Unlike other theories, however, it is not a method for the improved understanding or calculation of any one given portion of reality (position, motion, trait, or hierarchical relation), but one for the improved understanding of the observer’s known relationship with the fabric (or data composition) of the portion of observed (and calculated/understood) reality itself. This renders all calculations contextual and relative to prior opinion. As a theoretical function, this approach recognises opinion as inherently biased or flawed in specific patterned ways that are diagnostic and indicative of optimisation opportunities. This makes opinions teleologically weighted and entangled objects, and reliably represents reality to us, its human users.

This improved understanding of the observer’s relation to the data is useful in improving our calculations and assessments of it, using only existing methods. This, in turn, usefully results in improved predictions without the need for significant changes to existing systems and calculation methods. 

Whilst in fact a declaratively dialectic logic (“check your existing data for other data apparent within it, where possible” is, as a strategy, self-evidently positive overall) whose computational remit is widespread and can be present in all notions of debatable ‘reality’, it is most readily applicable, useful, and topical where clear formulas and calculation methods for defined data matrices already exist, such as in physics, computing, or any data-containing process management.

A note on the formal structure of this logic: unlike most formal theories, it is a non-axiomatic, non-monotonic logic in the Babylonian tradition. This unusual theoretical structure is what makes it paradigm-shifting, as accepting its premise as real invites the opportunity to ask how we humans, individually and collectively, relate to the data describing reality, and therefore what meaningful outcome(s) we consider to form part of our collective and individual notions of predictable reality. This translates to accepting a co-creative sense of realism, mirrored mathematically as a formalised teleological approach to Game-strategic mathematics.

Wittgenstein-von Neumann-Langlands consciousness realism

The Dot Theory’s proposed theoretical logic system could best be represented as operations in a computational model of Wittgenstein’s logical atomism on data, processed using von Neumann’s architecture within a wider framework architecture (Langlands). It would attempt to represent and compute individual answers about reality as a complex network of facts and logical relationships, while also recognising the inherent limitations of such representation as outlined in Wittgenstein’s philosophy (the representation being self-referential and “needing to be shown within the structure itself”), by being able to observe how the data changes within the program being evaluated or observed.

Taking this view on realism would treat the Langlands program not just as a set of mathematical conjectures, but as a visible and temporal framework available for a refined understanding of the nature of mathematical reality: one in which it describes itself (and our relationship to it) as being a function of reality, rather than of an objective sense of reality itself.

This observer-bound coding limitation of representations (the ability to express/say what is observed/experienced) is here, instead of being seen as a limitation, translated as the meaning-containing framework of the representation: its metadata.

In the Dot Theory’s case, the suggestion is that, for personal, subjective matters, the individual observer can be considered to be that limitation, and that their meaning, as definable by their self-descriptive data and its statistically associated metadata, is the framework’s limitation. This translates to a practical computing language for cheap, digitally mediated, real-time, and individualised predictive healthcare and decay-pathway optimisation.

For other pathway-analysis and individual scientific objective matters (concepts), the data describing the measurements and their metadata (measurement device and known environmental circumstances) and the rules describing them (the sciences) would describe each framework’s limitation.

This can further be interpreted as a formalised opportunity for systematic de-lensing of the computation, so as to remove bias that occurred during observation, and simultaneously as an acknowledgement of the needs equally (even if differently) met by other emerging formats of conscious-data theories of reality.

Observer-dependency and Artificial Intelligence 

The central premise of the Dot Theory is that reality is fundamentally a co-creative, observer-dependent and, most fundamentally, an individual-experience-dependent construct.

This theoretical mathematical position offers consistent opportunity for dual use in AI-based pattern analysis:

  1. subjective and self-referential frameworks (of which our current description of physics, and all the formulas it uses to describe reality, is of course one), as seen in healthcare, education, and culture; and

  2. mathematics and physics, where the meaning attributed to the numbers is agreed by each individual science and its conventions. This, among others, translates to energy and resource management, decay-pathway prediction, and fusion-based collision-control experiments.

Constructs of concepts are such individual experiences represented in shared ideas or understandings, and they must, for integrity, always be considered as such. This is significant, considering that all notions of reality, including ideas of shared reality such as formulas, are formed by such concepts and form related concepts that define the form and function of the constructs. This also positions consciousness as the meta-structure that perceives reality but cannot perceive its own framework, and cannot define what it is, only what it does.

One could therefore posit that, data-technically speaking, consciousness is an individuated construct of degrees of accuracy in the relationship between data seen and data understood (absence of lensing).

Theoretical Structure 

The Dot Theory’s theoretical structure is as per the concepts of Game Theory, used widely in physics, computing, and process management, where it is conceived that different function-strategies produce different outcomes, which in turn become the building components of other games, played out by variously informed (intelligent) strategies in more, or less, complex sequences, to variously performing outcomes, under varying circumstances, with varying statistical confidences, subject to varying criteria and purposes.

This is as analogous to prediction and evolution as it is to emerging properties becoming observable, measurable, and calculable by new methods of observation, interpretation, and calculation. Each of these is a step used in formalised Game strategies to make up their construct or recipe, and each inherently defines a user-relation toward success. This, in a sense, studies the logic behind the motive for implementing a strategy. There being only three strategies (to win, to become better, or to lose, in any sense, the last of which is irrational), this defining trait asks whether the motive of the chosen strategy is to demonstrate something rhetorically or to dialectically identify a better answer from the now combined available information.

Applying Dot Theory to any observation (and the data describing it) can be positioned as: the process of defining the construct of meaning that defines the observer-defined strategy that confidently defines the path the observer will, albeit when viewed in future retrospection, be most likely to describe as having been most like what they would then be likely to describe as the successful outcome; most like the thing they could have known would happen (had they known everything there is to know, and once having taken all the facts into account).

This is why, in principle, this realism accepts that any person born in your body and bed would do the same thing as you are doing now. 

This is because applying Dot Theory to the data would invite the inclusion of data previously observed to describe the most frequently and most successfully used method, used under the most relevantly similar circumstances. This creates space to observe these relative changes in the topological behaviours of the data over time, inevitably offering improved predictive capability.

In this Game-theoretical sense, the idea of Evolution over time is then the Seen-to-Take-All winning strategy, because it is then considered relative to the self-referential framework of the human species. The classifying group, however discreet about themselves, inevitably sees themselves as the linguistic authority, and occasionally, as a result, as the more valid perspective; or, otherwise put, “evolution is shaped by the survivor”, or “history is written by the victor”. This resonates well with existing interpretations of these classification strategies and notions of historical revisionism, and lends support to a good integration of Dot Theory with existing models of computable reality.

This formalised, systematic maximising of existing models’ intelligence is then, in some senses, the process of choosing to proactively apply that broader analytical strategy predictively, by asking from the available data what the most informed decision would be, and offering it as data. By including new layers of associated and probabilistically valid information, this theoretical structure provides a meaningful shift in the computational framework presented as predictable reality, and enables the inclusion of layers of associated metadata in a way that offers valid predictive probabilities and tangible real-world solutions.

This is analogous to defining success, progress, or evolution, and as such, one could argue that “Dot Theory” as a formal logic is not a formulaic entity but a method of realising this process: a process of identifying more accurately what is more true/real computationally, and one that calculationally uses only what we already know to have been true.

Applications and composition 

  1. Computer Science  

The composition of reality can technically be thought of as the observed and recorded data-trail. By additionally considering any available, in any way associated data, with wider, also associative and therefore potentially predictive patterns, in any given evaluation and at any given stage possible, we inevitably make it probable that we improve the outcome and the benefit gained from it, especially when compared with less analytically considered options. That is the Dot Theory’s computer-scientific translation of its Game strategy: when possible, consider all possible but relatively relevant data as such (relatively, in that it has been statistically weighed) for inevitably better decision-making.
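A minimal sketch of that strategy, assuming prior cases are reduced to feature vectors with known outcomes, and that “statistically weighed” is modelled as an exponential similarity weight (an illustrative choice, not a prescription of the theory):

```python
import math

def similarity(a: list[float], b: list[float]) -> float:
    """Statistical weight for a prior case: closer feature vectors
    ('most-like' cases) get exponentially more influence."""
    return math.exp(-math.dist(a, b))

def weighted_prediction(query: list[float],
                        priors: list[tuple[list[float], float]]) -> float:
    """Blend the outcomes of all available prior cases, weighted by relevance."""
    weights = [similarity(query, feats) for feats, _ in priors]
    blended = sum(w * outcome for w, (_, outcome) in zip(weights, priors))
    return blended / sum(weights)

# Invented prior cases: (feature vector, recorded outcome).
priors = [([1.0, 0.0], 0.9), ([0.9, 0.1], 0.8), ([0.0, 1.0], 0.2)]
print(weighted_prediction([0.95, 0.05], priors))
```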

Doing this in real-world calculation had been irrelevant until, and is impossible without, the advent of AI and quantum computing. This makes the theory timely in its emergence in computer science’s theoretical landscape.

The computer-scientific identification and appreciation of the algorithmic composition of data, and of its mutations over time (recorded as metadata), now coherently forms the recipe for the composition of observed reality in a computational landscape. Thereby, it defines its relationship to reality (to the best degree it can express itself), congruently within its own understanding of identity-defining notions of success relative to the human users of the theory.

2. Physics 

In physics, the application of Dot Theory’s logic reduces to requiring only the modification/manufacture and additional inclusion of an associated infinite-Hilbert-space data matrix at the level of the Spinor, consisting of data (the compute’s metadata) describing any available, relative data, with any known pattern-associated statistical confidences represented in the mathematics used to make predictions in experimental physics. The topology of the string of code and its relationships to the data being calculated would then more accurately describe the reality of any event.

It also reframes our individual relationship with our current scientific understanding of the expression E = mc² to a provocative E = m⊙c³*.

Here the ⊙ constant represents the per-datapoint-considered but cumulatively represented lensing/bias: “per datapoint considered” in that the available metadata points are each local, observation-based data points, each contained within the equation that is represented, and can now confidently be associated with an individual-specific variation. It is a dialectic modification of Einstein’s original idea of a Cosmological Constant. The translation presented here successfully absorbs the current issue of the perception of an oddly expanding universe, and presents reality as computable across all scales.

Note*: the cursive annotation denotes that Energy is here not considered objectively and independently real, but its function is. This is how it would be perceived as an object of data within a Langlands program.
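Collecting the symbols already introduced (the fractal seed k from the Key Points, the per-calculation lensing factor ⊙, and the reformulated energy expression), the meta-equation can be written in one display; the Key Points variant additionally divides by kT:

```latex
\[
  E = m \odot c^{3},
  \qquad
  \odot = 1 + k \,\log\!\left(\frac{s}{s_0}\right) F_{\mu\nu}(\psi),
  \qquad
  k = \frac{1}{4\pi}.
\]
```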

This is necessary, even if it at times makes no difference to the outcome of calculations undertaken in some aspects of Quantum Physics. It does, however, make a difference if AI-based calculations of reality are made available for such predictive purposes.

This conditional relationship with data is already seen every day in the use of AI in predictive calculation using the Quantum-entangled relationships described in von Neumann’s work, and this already offers evidence that extending it by reframing is only mildly restrictive in its adoption as an innovation in physics.

3. Mathematics 

It is fundamentally, and coherently by definition, impossible to represent the Dot Theory in a singular and immutably exact mathematical formulation, if it were required to reduce, in a mathematically coherent fashion, Dot Theory’s idea of including any or all available additional data matrices to all the potential formulas considered (to make it omnivalent); the formulations themselves are the emerging, observer-bounded frameworks that are being pre-analysed for lensing factor. This is a logically defining facet, not a flaw or failure.

As a theory it does so unapologetically, because it makes sense that in a landscape of theoretical evolution where Classical theory required physical observational proof and General Relativity required mathematical observational proof, a GUT like the Dot Recursive Relativity theory could require logical observational proof. This makes the proposal’s innovative nomenclature intellectually reasonable, beyond its immediately effective impact: the conceptual elevation in meaning of the formulations and equations of Quantum Physics to that of making the entire observable reality calculable. This, simply and necessarily, because each formulation is conditional: computation/question-specific and scale-specific by its definition of reality.

The equation’s formulation is therefore dependent on which data is to be considered, for which calculation, and for which purpose (in relation to what other data). But the inclusion and extension of multivector representations, with Clifford convolutions and Clifford Fourier transforms in the context of deep learning, would represent its mechanism fluidly.
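For readers unfamiliar with multivectors, here is a minimal sketch of the kind of object a Clifford-algebra representation offers: one algebraic entity that can carry a measurement and an observer-context weighting in separate grades. The grade assignments are illustrative choices, not the paper’s specification, and real work would use an established library such as the clifford Python package:

```python
from dataclasses import dataclass

@dataclass
class MV2:
    """Multivector in Cl(2,0): s + x*e1 + y*e2 + b*e12."""
    s: float = 0.0   # scalar    (illustratively: statistical confidence)
    x: float = 0.0   # e1 part   (measurement component 1)
    y: float = 0.0   # e2 part   (measurement component 2)
    b: float = 0.0   # bivector  (illustratively: observer-context weighting)

    def __mul__(self, o: "MV2") -> "MV2":
        # Geometric product with e1^2 = e2^2 = 1 and e12^2 = -1.
        return MV2(
            s=self.s*o.s + self.x*o.x + self.y*o.y - self.b*o.b,
            x=self.s*o.x + self.x*o.s - self.y*o.b + self.b*o.y,
            y=self.s*o.y + self.y*o.s + self.x*o.b - self.b*o.x,
            b=self.s*o.b + self.b*o.s + self.x*o.y - self.y*o.x,
        )

measurement = MV2(x=0.7, y=0.3)   # raw observation (vector grade)
context = MV2(s=1.0, b=0.05)      # metadata carried alongside it
print(measurement * context)      # one algebraic object, both layers mixed
```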

The mathematics describing reality would then become formulated as the question to pose to a trained AI, enabling it to exercise its ability to confidently give relevantly predictive context from existing and associated historical data, evaluated against composite data-avatars representing the individual or the decay-pathway experiment.

Application 

What is required for it to be applied successfully is a) acknowledging it as an entirely valid computational perspective on the information making up the reality under analysis, and b) successfully processing the data describing the equations represented as patterns describing observed reality, as data in a computable landscape.

This is permissible because we must inevitably come to realise that:  

  1. logic and perception are the first and fundamental entry portals to all understanding of reality (the demonstrable and repeatable kinds as well as the individual and emotive ones), and that

  2. we must decide that what we observe is real. Whilst both are born from our understanding of reality, they are in every sense equally limited and defined by it.

Conclusion 

We have now (this is the position of this and the associated Dot Theory papers) coherently and logically displayed sufficient evidence to demonstrate that the real-world experience of reality exceeds our current use of our relationships to mathematical frameworks, and that this inescapable insight requires, if not forces, us to re-evaluate the limitations of the perspectives currently used in our computational relationship to the mathematical frameworks used in the calculation of reality. There is no issue with the mathematical frameworks themselves; there is only a potential performance issue associated with the way we choose to view and describe our relationship to them.

We must therefore now use applications to describe, and offer for evaluation, the data describing the changes in the framework's shape over time. This change in relationship, or perspective, to the data is a simple yet effective suggestion made in the Dot Theory’s premises, presented as an evolution of the Theory of General Relativity. In doing so, it imposes, by pure-logic argument alone, adequate evidence for considering an alteration in our relationship to the fundamental mathematical objects used to describe our relationship with predictable reality, toward those presented in Dot Theory’s Theory of Recursive Relativity.

Implementation

The implementation of this theoretical change a) imposes a significant shift in the required real-world computational capability (compute), and b) carries a somewhat imperative requirement to improve the public’s general understanding of information processing, learning, and computing. This is here presented, both conceptually and socially, with immaculate timing, and well within the remit of the projected future for humans and Quantum and AI applications.

As such, this paper presents logical evidence that we, in all probability, already function in a Wittgenstein-von Neumann reality, and should therefore logically consider making the nomenclatural changes needed to reflect that. The proposal of this paper reflects the position that we live in a reality whose interpretational outcomes can be predicted when the analysis of data representing reality as such is paired with high-powered AI. It also says that this can safely be done to industry standard when data is considered as a topological landscape in a Langlands program. It presents that we therefore ought to consider updating our human and operational linguistic representation of reality, in both physics and mathematics, to reflect that.

In this sense, despite its world-view-shattering effect, Dot Theory is not much of a formula, or even a constant; even as a constant it is not much of one, as its numerical values are context-relative. Its paradigm-shifting innovative construction, however, is that of a logic for the implementation of a functionally and logically consistent, Universal, yet relatively variable, lensing/bias-corrective constant. This is only one, yet inescapably logical, idea to be kept in mind when making predictions. The proposal creates a mathematical space for data layers to confidently reside relative to the Universal constant of the speed of light.

Its implementation means that nothing can go faster than the speed of light: no thing, no object that has been seen enough to have become a noted and named “thing”. But then there is the group-definitional class of “those things that are not yet things” (i.e., algorithmic expectations of information), which can, as recently confirmed with the quantum tunnelling of data, exceed that limit. This leaves a wide-open computational space for behaviours, and clusters of behaviours, “that have not yet been observed with enough confidence to have become a thing”. This innovative take on data representation creates a wide-open space to compute new data: data that was perhaps first seen as anomalous in its new-found presence, or observed as evidence of logic; but from interplanetary space object to wave collapse, it is a principle that stands and confidently represents computable reality to us humans.

In closing,

This theory requires us to accept that, as conscious human observers of reality, we are finite singular objects in a, perceptually speaking, undefinable space; that we cannot fully perceive the future in absolute terms, only anticipate reality relative to our anticipated perceived needs; and that we cannot know the past in absolute terms, only our interpretation of it relative to our current perceived needs. Even then, we ultimately only know it as the unique translation of one individual’s conscious perception of success. But that leaves us with two very valuable computational capabilities:

  • anticipating reality relative to our anticipated perceived needs, and

  • interpreting the past relative to our current perceived needs.

What this means, in other terms, is that accepting Dot Theory as a conceptual framework for reality is accepting that a) an individual observer cannot travel in time; that b) any one thing can only “be” in pragmatic absolute terms when we are fully and irrefutably aware of all possible data describing the observed present (things are facts if everything that can be known about them is considered known); and c) that you can use a Time-and-Space-based relation to predict, plan, and make better choices, optimised relative to you. None of these three premises is conceptually troubling, and all this while accepting that you are, ultimately and inevitably, travelling on a composite of many maps.

This, as opposed to the current mechanistic methods, by which Spacetime would be considered relative to human reality rather than to individual consciousness. By this theory of reality, each observer travels by their own map, drawn from experience and influenced only by study and language, in which much learning can be done, but they may on occasion be late to identify a thing as a thing, as this is relative to education and available information. How much is objective reality and how much is learned and conditioned forever remains unanswered, as every helpful answer is accompanied by a note that more research is required.

This slow trickle-down and delayed recognition can be seen as a byproduct of the overall strength of the institution supporting the opposing view, but it must be borne in mind that the rigour often associated with institutional rigidity can also lend reliability to innovative descriptions made by the institution’s students. These can then more confidently be combined and can, on occasion, generate innovative understandings.

This, again in other terms, says that the somewhat anticlimactic paradigm shift, from the current classical, objective, and independent paradigm of reality to the fractal one of Dot Theory, is analogous to choosing whether or not to make use of things known to be true for other observations most like the desired observation (information). As such, its invitation, if accepted, simply offers us the opportunity to become more independent of non-observational data (opinion), and to have the opportunity to exert free will more often. It conceptually shows associated predictive correlation, and its acceptance and implementation make it more likely to be to the observer’s benefit. This, and its implementation, in every relevant way constitutes a valid theory of reality.

What the spirit of this theory also reminds us of is that our theories must be able to thread and tie definitions and their formulas back to a relevant definition of reality when discussing theories and frameworks of reality. This continuous, uninterrupted thread of accurate translation is, in the author’s opinion, what was missing: a solution to a pragmatic error in the contextual translation of the meaning of the linguistic framework used to describe reality in science, if you will.

End 

Stefaan Vossen

Personal closing statement:

To create is to express something that is not yet known to have been true within a known and shared framework. It is to state and declare to others a now observable and previously undefined fact in an agreed concept of Spacetime. Creating is to make it so that something is now observable within that framework, when it had not previously been known to be.

Inevitably then, it is not only a declaration that something is, but also that it could or should have been this before becoming what it is now said to be. This creative action’s potential to be useful, and consequently to have value, introduces contextualised opinion and judgement, giving the act of creation itself an infinite yet defined sense of contextually derived potential for meaning and value, even if only relative to definitions agreed with others within the shared observer framework.

This pathognomonically means that creation, as an object in Spacetime, can by no definition be an entropic fact until it both exists as a creation relative to observer attention and the impact of that observer attention is understood as relative to observer opinion. Only then does creation have entropic existence derived from observed purpose, and only then does its usefulness make it a fact of sorts for us to interact with.

Yes, “it is because it is”, but rather than being a repetitive “is”, the heuristic categories those two aspects of creation occupy always differ. It is the tension in those differences that can be seen to create the wave collapse: the perception needed to help us cross the bridge to a central and reliable idea of shared reality. Creation itself is then, in every sense, an unobservable, quasi-solipsistic singularity of sorts, and life is the record of what you could demonstrate to others.

It is one from which shared reality emerges through the observer’s reality-creating, singularity-skipping free will and attention: the subjective foundation of experiencing consciousness. One we can’t be human without, yet one whose creative act, and its usefulness, we humans can’t see until after seeing the distractingly magical creative fact.

To create is to fear dissociation, and the cultural risk that the effort does not pay off, that it does not become real in the eyes of others or, worse still, is contemporaneously found to be false. But without taking this risk, creation, evolution, and therefore life itself are simply impossible. This might at times seem unbearable, but remember that whilst there is no true creation without risk, there is plenty of creativity with even just a little.

Managing how much of either, and when, is about more fact, less opinion, and more user free will, all known to be spoken in our teachers’ languages. That insight is needed to come to understand that real living is the process of creating life, and that this comes from developing the individual self-control that enables the realisation of better decisions (creation). All of life’s education, then, is what frames and manages our creativity into the spectrum between rote and true creation. Simply put: humans grow by becoming curious about something; how they grow depends on how fairly they can get it.

History, then, is not just a story of victors and survivors, but of the language life uses to tell us its parable.

What follows is a paper written prior to the one above, which served as its inspiration:

Why does Quantum Mechanics calculate reality as if all the data representing it is local to shared reality, but nonlocal to the individual,

when the data describing visible reality (and meaning) is clearly local to us? 

Or: how the terms of the General Theory of Relativity and Quantum Mechanics put their knickers in a Twistor to avoid consciousness.

By Stefaan Vossen 08/11/2024 

In an ideal world, the mathematics claiming to describe reality should reflect reality accurately.  

Today’s structure of the mathematics of Quantum Mechanics, however, calculates the data describing reality as nonlocal relative to the observer, yet it is self-evident that this is not our only relationship with the data describing our relationship with reality.

The Dot Theory demonstrates that Quantum Mechanics works smoothly across the Standard Model when we modify the terms describing the Spinor’s mathematical structure and enable the calculation of the data describing reality as being local to us; at least, on condition that the calculation is made to predict a trajectory approximating a previously made observation.

On occasions where, even if conditional, self-evident truths like this appear to contradict the terms of the mathematical description used to describe reality, a prompt re-examination of the mathematical terms is required. 

This paper is a pure-logic argument inviting physicists and mathematicians to reconsider the current defining terms and geo-mathematical structure of the Spinor as used in Quantum Mechanics, on the basis that they self-evidently do not consistently reflect the conscious, in vivo terms of the full spectrum of the relationship between the observer and the data describing observed reality.

In Quantum Mechanics, the set-definitional terms and functional structure of Spinor mathematics calculate the observational data consistently as if it were nonlocal, in line with the consideration that the data used to describe reality is nonlocal.

If we look at the relationship between observer and reality in vivo, however, cursory analysis swiftly reveals at least some data to be local, as evidenced by the success of the derivative equations used in weather and financial forecasting methods. Further evaluation demonstrates this predictive phenomenon to be conditional, available only wherever historical data exists for more accurate trajectory forecasting (prediction with a real-world prior) on most-like events, providing data lakes of progressive-looking data*.
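The weather-forecasting analogy can be made concrete with the classic analog method: find the historical window most like the present one, and forecast what followed it. A minimal sketch with an invented series (an illustration of “prediction with a real-world prior on most-like events”, not the Dot Theory’s formal machinery):

```python
def analog_forecast(history: list[float], window: int) -> float:
    """Analog method: find the past window most like the most recent one,
    and forecast the value that followed it ('most-like prior event').

    Assumes len(history) > 2 * window so at least one candidate exists."""
    recent = history[-window:]
    best_next, best_err = None, float("inf")
    for i in range(len(history) - 2 * window):
        candidate = history[i:i + window]
        err = sum((a - b) ** 2 for a, b in zip(candidate, recent))
        if err < best_err:
            best_err, best_next = err, history[i + window]
    return best_next

series = [0.1, 0.4, 0.8, 0.5, 0.2, 0.4, 0.9, 0.6, 0.2, 0.5]
print(analog_forecast(series, window=3))
```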

Our experience and observation of reality is therefore, in its material part, quantum. This is self-evident from our success with classical, cosmological, weather, and financial predictions; yet the mathematics used in Quantum Mechanics does not appear able to consistently incorporate all portions of the data spectrum representing reality into its equations. Where it treats data that is local to the observer as nonlocal relative to the observer overall, it catastrophically fails to compute it.

This, from the position of the Dot Theory, is because the mathematical terms only accurately compute those terms that are nonlocal. As a result, Quantum Mechanics, conceptually and in its current mathematical structure, fails to absorb the fact that, to our subjective, conscious experience, the real world is data: a world where our observations and measurements are, in fact, to be computed as, relatively speaking, metadata.

And therein lies the asymmetrical superimposition onto the issue with the Theory of General Relativity, where the exact opposite terminological error could be corrected by reintroducing Einstein’s gravitational lensing constant, but making it relative by enhancing E to fE or E¯.

It is the Dot Theory’s mathematical proposition that Quantum Mechanics becomes more efficient when the mathematics used reflects reality as data, and the data describing our conscious experience of reality as metadata local to us. It does this by presenting a convincing argument between two self-evident yet mathematically uncomfortable observations, of which only the mathematics can be adjusted:

  1. It is logically demonstrable that the geometry and set definition of the Spinor, as currently defined in Quantum Mechanics, does not treat the data describing our conscious individual experience of reality as local metadata, when it clearly should.

  2. It is also demonstrable that, if the Spinor’s structure were modified to reflect the calculation of reality as if reality were nonlocal and the data describing observed reality were considered local, the computation on progressive-looking data* would be quantum across the Standard Model.

Therefore, if Quantum Mechanics shows itself to be coherent across the Standard Model when the necessary set-definitional terms and structure of the Spinor are changed to calculate the data describing reality as local and reality as data (even if only when calculating progressive-looking data), then the mathematical terms of Spinors are, logically, to be changed to reflect this.

The currently held assumption in QM is that it calculates, and should calculate, the data describing reality as nonlocal. This, the Dot Theory positions, is in essence the error in the chosen mathematical expression of the function of E relative to the observer in the General Theory of Relativity.

For QM to be able to become continuous with the Standard Model, the GTR must reciprocally raise its equation and tensor relation by one function, to E¯ >= m⊙c³ (with ⊙ = the total sum of observational biases/constants (gravitational constant, or cumulative lensing/c)).

If QM becomes continuous with the Standard Model by making those defining corrections, the defining corrections suggested must logically be reflected accordingly. 

However, neither the terms of our physics nor its mathematics currently do this. 

The psychological and cultural challenge inherent in treating reality as hyperdata (if reality were considered as data, the data describing reality would, relative to our current conscious relationship to reality and lived experience, be described as metadata) may conceptually be significant, but that does not make it incorrect.

The computation of reality being obviously quantum, when reality is considered nonlocal and its data is considered local, is the only evidence required to, at least intellectually, force the review of and change to Spinor mathematics needed to make Quantum Mechanics work across the Standard Model, and for our mathematical relationship to the function of E to be elevated in the GTR.

Making this change in the structure of Spinor mathematics in Quantum Mechanics for predictive purposes is technically conditional on the computational perspective taken on the data being retrospectively prospective, i.e. it must have been shown to have been true under similar circumstances in the past.

This condition (that the data used to predict using QM describes something like it that has been true like it before, i.e. it cannot make “fantastical” predictions) is, as far as the author understands, the only condition imposed on the current mathematics by the Dot Theory’s suggested mathematical correction, and on the world of physics by this shift in mathematical perspective.

In normo-verbal terms: the Dot Theory does not exclude the possibility that there are other realities to be considered, but states that reality is quantum if its observations are treated as nonlocal and its perspective is progressive.

 

*Progressive-looking data is data selected on the basis of a specific relationship between the data itself and the computational perspective taken on it. It consists of search-specified, cluster-pattern-recognition-based data that can be referenced with statistical confidence to improve trajectory accuracy toward a specifically defined target, when compared to previous attempts for similarly defined targets.

 

 

Reality is the matter you can perceive  

Let it be 

Not that you can do any different really 

But if you try not to 

it will be something less.