Dynamical Systems via Domains: Toward a Unified Foundation of Symbolic and Non-symbolic Computation
Levin Hornischer
Abstract:
Non-symbolic computation (as, e.g., in biological and artificial neural networks) is astonishingly good at learning and processing noisy real-world data. However, it lacks the kind of understanding we have of symbolic computation (as, e.g., specified by programming languages). Like symbolic computation, non-symbolic computation also needs a semantics, or behavior description, to achieve structural understanding. Domain theory has provided this for symbolic computation, and this thesis is about extending it to non-symbolic computation.
Symbolic and non-symbolic computation can be described in a unified framework as state-discrete and state-continuous dynamical systems, respectively. So we need a semantics for dynamical systems: assigning to a dynamical system a `domain' which describes the system's behavior. A domain is a set of elements ordered by information containment. In our case, the elements are observable behaviors of the system, and one behavior x is informationally contained in another y if what can be learned about the system from x can also be learned from y. Finitely observable behaviors are then `compact' or `directly accessible' elements that can approximate the infinite limit behaviors of the system.
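For intuition, a standard domain-theoretic example (not taken from the thesis itself): the domain of finite and infinite streams over a finite alphabet, ordered by prefix, where the finite words play the role of finitely observable behaviors.

```latex
% Streams over a finite alphabet \Sigma, ordered by prefix:
D \;=\; \Sigma^{*} \cup \Sigma^{\omega},
\qquad
x \sqsubseteq y \;\iff\; x \text{ is a prefix of } y.

% The compact (finitely observable) elements are exactly the finite words;
% every infinite stream is the directed supremum of its finite prefixes:
y \;=\; \bigsqcup \,\{\, x \in \Sigma^{*} \;:\; x \sqsubseteq y \,\}
\qquad (y \in \Sigma^{\omega}).
```

Here a finite prefix x is informationally contained in a stream y exactly when every observation obtainable from x is also obtainable from y, matching the order just described.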
In part 1 of the thesis, we provide this domain-theoretic semantics for the `symbolic' state-discrete systems (i.e., labeled transition systems). And in part 2, we do this for the `non-symbolic' state-continuous systems (known from ergodic theory). This is a proper semantics in that the constructions form functors (in the sense of category theory) and, once appropriately formulated, even adjunctions. Stronger yet, we obtain a categorical equivalence in the continuous case: a complete intertranslatability between systems and domains.
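Schematically, and with notation chosen here purely for illustration (the thesis fixes the precise categories, morphisms, and the direction of the adjunction), the semantics has the following shape:

```latex
% A behavior functor from a category of systems to a category of domains:
\mathrm{Beh} : \mathbf{Sys} \longrightarrow \mathbf{Dom},

% once appropriately formulated, one half of an adjunction with a
% `realization' functor in the other direction (the name is illustrative):
\mathrm{Real} : \mathbf{Dom} \longrightarrow \mathbf{Sys},
\qquad \mathrm{Real} \dashv \mathrm{Beh} \ \text{(or dually)},

% and, for the state-continuous systems, a categorical equivalence:
\mathbf{Sys} \;\simeq\; \mathbf{Dom}.
```

The equivalence is the strongest of the three statements: it says that systems and their behavior domains determine each other up to isomorphism, i.e., are completely intertranslatable.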
In part 3, we explore how this semantics relates the two types of computation. It suggests that non-symbolic computation is the limit of symbolic computation (in the `profinite' sense). Conversely, if the system's behavior is fairly stable, it may be described as realizing symbolic computation. The concepts of ergodicity and (algorithmic) randomness help to achieve and exploit this stability. In the last chapter, we then study the general concept of stability: a novel interpretation of Fitch's paradox reveals that stability cannot jointly have four desirable properties. This has implications for AI safety: after all, for a neural network to be safe, we expect it to be stable in the sense of computing the same output on sufficiently similar inputs. The theme here is to explore new applications of established philosophical tools (mostly from epistemology) in the non-symbolic computation of modern AI.