Notes
2025/10/28
This continues the exploration from week 18 and week 19 about a calculus for dealing with paradoxical fixed points. The previous approach used ordered sequences with explicit step counters and phases to manage shifts. This week, I've simplified the design by switching to unordered maps that track relationships between values rather than temporal sequences.
Instead of representing a time loop as an ordered sequence like ['x'; 'y'], the new implementation uses a map structure that represents the relationships between values. The simple liar paradox is now represented as ('x' & 'y'), which conceptually means “a value that oscillates between 'x' and 'y'”, in other words, a value that is both 'x' and 'y'.
Internally, this is stored as a time loop map: ('x': 'x', 'y': 'y'). This might look redundant at first. After all, why map each value to itself? This will become clear when we look at nested examples, because associating values with their origins is crucial for reasoning about paradoxical sub-expressions.
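As a concrete reference point, here is a minimal Python sketch of this representation. The helper name `loop` is illustrative, not from the original design; a time loop is just a dict mapping each value's origin (the key) to its current value.

```python
def loop(*values):
    """Build the time loop ('a' & 'b' & ...) as the map
    {'a': 'a', 'b': 'b', ...}: each value starts mapped to itself."""
    return {v: v for v in values}

# The simple liar ('x' & 'y') is stored as {'x': 'x', 'y': 'y'}.
liar = loop('x', 'y')
```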
Before diving into examples, let's clarify the notation:
- ('x' & 'y') is shorthand for the time loop map ('x': 'x', 'y': 'y')
- ('x': 'y', 'y': 'x') shows transformations: the key 'x' produces the value 'y', and the key 'y' produces the value 'x'
- When every key maps to itself, as in ('x': 'x', 'y': 'y'), we have a fixed point
- A regular value like 'x' has no map structure; it's just 'x'

The keys track where each value came from, which is crucial for reasoning about how values transform through function application.
Let's trace through the simple liar's paradox to see how the map-based approach works:
v = ('x' & 'y')
if v == 'x':
    'y'
else:
    'x'
When we compare a time loop to a regular value, we lift the comparison across all entries in the map. Using Church encoding where true = (x, y) => x and false = (x, y) => y, the comparison ('x' & 'y') == 'x' evaluates to:
('x': 'x' == 'x', 'y': 'y' == 'x')
...which evaluates to:
('x': true, 'y': false)
When this map of booleans is applied to the two branches 'y' and 'x', each function selects the appropriate branch:
- 'x': true('y', 'x') selects 'y'
- 'y': false('y', 'x') selects 'x'

This gives us ('x': 'y', 'y': 'x') as the intermediate result. Notice how the keys track where each result came from: the 'x' case produced 'y', and the 'y' case produced 'x'. This is why the map structure associates values with their origins—it's crucial for tracking transformations.
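The lifted comparison and branch selection can be sketched in a few lines of Python. The helper names (`lifted_eq`, `select`) are illustrative only; the Church booleans are ordinary two-argument lambdas.

```python
# Church booleans: true selects the first argument, false the second.
TRUE = lambda x, y: x
FALSE = lambda x, y: y

def lifted_eq(loop_map, value):
    """Lift equality against a plain value across all map entries."""
    return {k: (TRUE if v == value else FALSE) for k, v in loop_map.items()}

def select(bool_map, then_branch, else_branch):
    """Apply each Church boolean to the two branches, keeping the keys."""
    return {k: b(then_branch, else_branch) for k, b in bool_map.items()}

liar = {'x': 'x', 'y': 'y'}      # ('x' & 'y')
bools = lifted_eq(liar, 'x')     # {'x': TRUE, 'y': FALSE}
result = select(bools, 'y', 'x') # {'x': 'y', 'y': 'x'}
```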
But we're not done yet. The key insight is normalization. After evaluating an expression, we check whether the resulting map represents a loop back to the original values. We simply check whether the keys and values of the map are the same set—just potentially reordered.
For our simple liar, we have ('x': 'y', 'y': 'x'). The normalization process detects that this is a loop: the set {'x', 'y'} appears as both keys and values. When a loop is detected, the map is transformed so each key maps to itself: ('x': 'x', 'y': 'y'), which is just ('x' & 'y').
This normalized form means we're back where we started—giving us a true fixed point for v.
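The loop check described above is a set comparison over keys and values. A minimal sketch for flat loops (string values only, since nested maps are not hashable as Python dict values in a set without extra encoding):

```python
def normalize(loop_map):
    """If the keys and values form the same set, the expression has
    looped back to its original values: reset each key to map to
    itself, recovering the original time loop."""
    if set(loop_map) == set(loop_map.values()):
        return {k: k for k in loop_map}
    return loop_map

# ('x': 'y', 'y': 'x') loops back to ('x' & 'y'):
normalize({'x': 'y', 'y': 'x'})  # {'x': 'x', 'y': 'y'}
```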
Like the sequential approach, the map-based approach uses a notion of "rank" (or depth) to handle nested time loops. A regular value has rank 0, a simple time loop like ('x' & 'y') has rank 1, and a nested time loop has rank 2 or higher.
When comparing two time loop maps of the same rank, we zip them together key by key. When comparing values of different ranks, the lower-rank value is lifted into the structure of the higher-rank value. This ensures compositional reasoning about nested paradoxes.
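A sketch of rank and same-rank zipping, under the assumption that time loops are plain dicts (keys that are themselves time loops would need a hashable encoding in real Python; strings stand in for them here). The names `rank` and `zip_compare` are illustrative:

```python
def rank(v):
    """Depth of time loop nesting: 0 for a plain value, otherwise
    one more than the deepest value stored in the map."""
    if not isinstance(v, dict):
        return 0
    return 1 + max(rank(x) for x in v.values())

def zip_compare(a, b):
    """Compare two time loops of the same rank key by key."""
    return {k: a[k] == b[k] for k in a}

rank('x')                  # 0
rank({'x': 'x', 'y': 'y'}) # 1
```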
Now that we've seen the basic mechanism, let's explore what happens when we compare against a time loop instead of a simple value.
v = ('y' & ('x' & 'y'))
if v == ('x' & 'y'):
    'y'
else:
    'x'
The left side has rank 2, the right side has rank 1. We lift the right side and compare each entry:
- 'y' compared to ('x' & 'y') produces ('x': false, 'y': true)
- ('x' & 'y') compared to ('x' & 'y') produces ('x': true, 'y': true), which simplifies to just true

This gives us a nested structure: ('y': ('x': false, 'y': true), ('x' & 'y'): true). We can now apply this to the branches:
- 'y' key: ('x': false, 'y': true) applied to ('y', 'x') gives ('x': 'x', 'y': 'y')
- ('x' & 'y') key: true applied to ('y', 'x') gives 'y'

This produces ('y': ('x': 'x', 'y': 'y'), ('x' & 'y'): 'y'), which is equivalent to ('y': ('x' & 'y'), ('x' & 'y'): 'y'). After normalization, each key maps back to itself, ('y': 'y', ('x' & 'y'): ('x' & 'y')), giving us our original value v, ('y' & ('x' & 'y')).
This example shows how nested conditionals interact with time loops:
v = ('x' & ('x' & 'y'))
if v == ('x' & 'y'):
    'x'
else:
    if v == 'x':
        'y'
    else:
        'x'
Following the same pattern of rank-based lifting and evaluation, the outer comparison produces ('x': ('x': true, 'y': false), ('x' & 'y'): true), and the inner comparison produces ('x': true, ('x' & 'y'): ('x': true, 'y': false)).
The inner conditional evaluates to ('x': 'y', ('x' & 'y'): ('x': 'y', 'y': 'x')).
The outer conditional then selects between 'x' and the inner result, which is zipped together with the outer map:
- 'x' key: the boolean map ('x': true, 'y': false) selects between 'x' and 'y', giving ('x': 'x', 'y': 'y')
- ('x' & 'y') key: true selects 'x' without needing to look at the inner conditional

After normalization, we get back ('x' & ('x' & 'y')).
This example demonstrates compositional reasoning:
v = ('x' & 'y')
if v == 'x':
    'y'
else:
    if v == 'x':
        'z'
    else:
        'x'
Even though ('x' & 'y') appears twice as v, the synchronized evaluation ensures consistent behavior. Both comparisons produce the same map ('x': true, 'y': false):
- 'x' key: both conditionals see true, so we take the outer then-branch, returning 'y'
- 'y' key: both conditionals see false, so we take the outer else-branch, then the inner else-branch, returning 'x'

The result is ('x': 'y', 'y': 'x'), which normalizes to ('x': 'x', 'y': 'y'). The middle branch (inner then-branch returning 'z') is never visited because the conditions are synchronized by zipping together the inner and outer conditionals via their keys.
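The synchronized evaluation can be sketched directly: because both comparisons come from the same v, their boolean maps are identical, and zipping by key guarantees that each entry sees consistent answers in both conditionals. (The encoding below is illustrative, not the actual implementation.)

```python
# Church booleans, as in the comparison semantics above.
TRUE = lambda x, y: x
FALSE = lambda x, y: y

v = {'x': 'x', 'y': 'y'}  # ('x' & 'y')

# Both conditionals test v == 'x', so they share one boolean map.
outer = {k: (TRUE if val == 'x' else FALSE) for k, val in v.items()}
inner = outer

# Per key: outer picks 'y' or the inner result; inner picks 'z' or 'x'.
result = {k: outer[k]('y', inner[k]('z', 'x')) for k in v}
# result == {'x': 'y', 'y': 'x'}; 'z' is never selected.
```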
The map-based approach eliminates the need to track explicit step counters and phases. Instead of thinking about "time step 0" versus "time step 1," we think about the relationships between values: when we have 'x', we get 'y', and when we have 'y', we get 'x'.
The normalization process automatically detects when we've completed a cycle and returned to the original configuration. This makes the semantics simpler and more declarative—we're describing relationships rather than sequences of operations.
The unordered nature of maps also makes it clearer that time loops aren't lists or sequences that can be indexed or iterated through. They're genuinely oscillating values that exist in a superposition of states, with the map structure tracking how those states relate to each other.
The implementation includes a simplify function that collapses trivial time loops. If all values in a map are identical, the map is replaced by that single value. For example, ('x': true, 'y': true) simplifies to just true. This effectively collapses a time loop where all possibilities lead to the same outcome.
This simplification happens recursively, so nested structures like ('a': ('x': 'foo', 'y': 'foo'), 'b': ('z': 'foo')) would simplify to ('a': 'foo', 'b': 'foo'), which further simplifies to just 'foo'. This process happens automatically during evaluation and is particularly important for cleaning up boolean maps before they're used in conditionals.
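A minimal sketch of such a simplify pass, matching the recursive collapse described above (the function name and dict encoding are assumptions, not the actual implementation):

```python
def simplify(v):
    """Recursively collapse any map whose values are all identical
    down to that single shared value."""
    if not isinstance(v, dict):
        return v
    simplified = {k: simplify(x) for k, x in v.items()}
    values = list(simplified.values())
    if all(x == values[0] for x in values):
        return values[0]
    return simplified

simplify({'x': True, 'y': True})  # True
simplify({'a': {'x': 'foo', 'y': 'foo'}, 'b': {'z': 'foo'}})  # 'foo'
```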
The map-based representation provides a cleaner foundation for the time loop calculus. The absence of explicit shifting operations makes the evaluation rules more straightforward, and the normalization process naturally identifies fixed points.
Future work might explore whether this map-based approach can be extended to handle more complex forms of self-reference, such as monotonically growing fixed points of the form x -> x + 1. The rank-based lifting already provides a foundation for nested paradoxes, but there may be other patterns that require additional machinery.
Another obvious next step is to work on an actual decision procedure for deriving paradoxical fixed points, which should be possible at least for the finitely paradoxical fixed points discussed here.